Hi all,
I want to know the precision (number of significant digits) of a float
in a platform-independent manner. I have scoured through the docs but
I can't find anything about it!
At the moment I use this terrible substitute:
FLOAT_PREC = repr(1.0/3).count('3')
How can I do this properly, or where is the relevant part of the docs?
Thanks
--
Arnaud
On Feb 25, 9:57 pm, "Arnaud Delobelle" <arno...@googlemail.com> wrote:
Hi all,
I want to know the precision (number of significant digits) of a float
in a platform-independent manner. I have scoured through the docs but
I can't find anything about it!
At the moment I use this terrible substitute:
FLOAT_PREC = repr(1.0/3).count('3')
I'm a little puzzled:
You don't seem to want a function that will tell you the actual number
of significant decimal digits in a particular number e.g.
nsig(12300.0) -> 3
nsig(0.00123400) -> 4
etc
You appear to be trying to determine what is the maximum number of
significant decimal digits afforded by the platform's implementation
of Python's float type. Is Python implemented on a platform that
*doesn't* use IEEE 754 64-bit FP as the in-memory format for floats?
Cheers,
John
On Feb 25, 11:20 am, "John Machin" <sjmac...@lexicon.net> wrote:
[...]
I'm a little puzzled:
You don't seem to want a function that will tell you the actual number
of significant decimal digits in a particular number e.g.
nsig(12300.0) -> 3
nsig(0.00123400) -> 4
etc
You appear to be trying to determine what is the maximum number of
significant decimal digits afforded by the platform's implementation
of Python's float type.
Yes you are correct.
Is Python implemented on a platform that
*doesn't* use IEEE 754 64-bit FP as the in-memory format for floats?
I had no knowledge of IEEE 754 64-bit FP. The Python docs say that
floats are implemented using the C 'double' data type, but I didn't
realise there was a standard for this across platforms.
Thanks for clarifying this. As my question shows I am not versed in
floating point arithmetic!
Looking at the definition of IEEE 754, the mantissa is made of 53
significant binary digits, which means
53*log10(2) = 15.954589770191003 significant decimal digits
(I got 16 with my previous dodgy calculation).
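A quick sketch of that conversion, for anyone who wants to reproduce the figure:

```python
import math

# 53 mantissa bits correspond to 53*log10(2) decimal digits
digits = 53 * math.log10(2)
print(digits)  # ~15.95, so 15 full decimal digits are always representable
```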
Does it mean it is safe to assume that this would hold on any
platform?
--
Arnaud
On Feb 25, 11:06 pm, "Arnaud Delobelle" <arno...@googlemail.com>
wrote:
On Feb 25, 11:20 am, "John Machin" <sjmac...@lexicon.net> wrote:
[...]
I'm a little puzzled:
You don't seem to want a function that will tell you the actual number
of significant decimal digits in a particular number e.g.
nsig(12300.0) -> 3
nsig(0.00123400) -> 4
etc
You appear to be trying to determine what is the maximum number of
significant decimal digits afforded by the platform's implementation
of Python's float type.
Yes you are correct.
Is Python implemented on a platform that
*doesn't* use IEEE 754 64-bit FP as the in-memory format for floats?
I had no knowledge of IEEE 754 64-bit FP. The Python docs say that
floats are implemented using the C 'double' data type, but I didn't
realise there was a standard for this across platforms.
Thanks for clarifying this. As my question shows I am not versed in
floating point arithmetic!
Looking at the definition of IEEE 754, the mantissa is made of 53
significant binary digits, which means
53*log10(2) = 15.954589770191003 significant decimal digits
(I got 16 with my previous dodgy calculation).
Does it mean it is safe to assume that this would hold on any
platform?
Evidently not; here's some documentation we both need(ed) to read: http://docs.python.org/tut/node16.html
"""
Almost all machines today (November 2000) use IEEE-754 floating point
arithmetic, and almost all platforms map Python floats to IEEE-754
"double precision".
"""
I'm very curious to know what the exceptions were in November 2000 and
if they still exist. There is also the question of how much it matters
to you. Presuming the representation is 64 bits, even taking 3 bits
off the mantissa and donating them to the exponent leaves you with
15.05 decimal digits -- perhaps you could assume that you've got at
least 15 decimal digits.
While we're waiting for the gurus to answer, here's a routine that's
slightly less dodgy than yours:
>>> for n in range(200):
...     if (1.0 + 1.0/2**n) == 1.0:
...         print n, "bits"
...         break
...
53 bits
At least this method has no dependency on the platform's C library.
Note carefully the closing words of that tutorial section:
"""
(well, will display on any 754-conforming platform that does best-
possible input and output conversions in its C library -- yours may
not!).
"""
I hope some of this helps ...
Cheers,
John
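For what it's worth, Python 2.6 and later expose these limits directly via sys.float_info, so no probing is needed:

```python
import sys

# On an IEEE 754 platform these report 53 and 15 respectively
print(sys.float_info.mant_dig)  # bits in the mantissa
print(sys.float_info.dig)       # decimal digits guaranteed to round-trip
```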
On Feb 25, 1:31 pm, "John Machin" <sjmac...@lexicon.net> wrote:
On Feb 25, 11:06 pm, "Arnaud Delobelle" <arno...@googlemail.com>
wrote:
[...]
Evidently not; here's some documentation we both need(ed) to read:
http://docs.python.org/tut/node16.html
Thanks for this link
I'm very curious to know what the exceptions were in November 2000 and
if they still exist. There is also the question of how much it matters
to you. Presuming the representation is 64 bits, even taking 3 bits
off the mantissa and donating them to the exponent leaves you with
15.05 decimal digits -- perhaps you could assume that you've got at
least 15 decimal digits.
It matters to me because I prefer to avoid making assumptions in my
code!
Moreover, the reason I got interested in this is that I am creating a
Dec class (decimal numbers) and I would like that:
* given a float x, float(Dec(x)) == x for as many values of x as possible
* given a decimal d, Dec(float(d)) == d for as many values of d as possible.
(and I don't want the standard Decimal class :)
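For the first property, note that 17 significant decimal digits are enough to round-trip any IEEE 754 double, so a decimal type built from a 17-digit string can always satisfy it; a minimal sketch (the Dec class itself is not shown here):

```python
# '%.17g' yields 17 significant digits, which uniquely identify a double,
# so float() recovers the original value exactly
for x in (1.0/3, 0.1, 1e300, -2.5e-10):
    s = '%.17g' % x
    assert float(s) == x
print("round-trip ok")
```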
While we're waiting for the gurus to answer, here's a routine that's
slightly less dodgy than yours:
>>> for n in range(200):
...     if (1.0 + 1.0/2**n) == 1.0:
...         print n, "bits"
...         break
...
53 bits
Yes I have a similar routine:
def fptest():
    "Gradually fill the mantissa of a float with 1s until running out of bits"
    ix = 0
    fx = 0.0
    for i in range(200):
        fx = 2*fx+1
        ix = 2*ix+1
        if ix != fx:
            return i

>>> fptest()
53
I guess it would be OK to use this. It's just that this 'poking into
floats' seems a bit strange to me ;)
Thanks for your help
--
Arnaud
On 25 Feb 2007 06:11:02 -0800, Arnaud Delobelle <ar*****@googlemail.com> wrote:
Moreover the reason I got interested in this is because I am creating
a Dec class (decimal numbers)
Are you familiar with Python's Decimal library? http://docs.python.org/lib/module-decimal.html
--
Jerry
On Feb 25, 3:59 pm, "Jerry Hill" <malaclyp...@gmail.com> wrote:
On 25 Feb 2007 06:11:02 -0800, Arnaud Delobelle <arno...@googlemail.com> wrote:
Moreover the reason I got interested in this is because I am creating
a Dec class (decimal numbers)
Are you familiar with Python's Decimal library? http://docs.python.org/lib/module-decimal.html
Read on in my post for a few more lines and you'll know that the answer
is yes :)
--
Arnaud
John Machin wrote:
Evidently not; here's some documentation we both need(ed) to read:
http://docs.python.org/tut/node16.html
"""
Almost all machines today (November 2000) use IEEE-754 floating point
arithmetic, and almost all platforms map Python floats to IEEE-754
"double precision".
"""
I'm very curious to know what the exceptions were in November 2000 and
if they still exist.
All Python interpreters use whatever the C double type is on the platform to
represent floats. Not all of those platforms use *IEEE-754* floating point types.
The most notable and still relevant example that I know of is Cray.
--
Robert Kern
"I have come to believe that the whole world is an enigma, a harmless enigma
that is made terrible by our own mad attempt to interpret it as though it had
an underlying truth."
-- Umberto Eco
Dennis Lee Bieber wrote:
On 25 Feb 2007 05:31:11 -0800, "John Machin" <sj******@lexicon.net>
declaimed the following in comp.lang.python:
>Evidently not; here's some documentation we both need(ed) to read:
>http://docs.python.org/tut/node16.html
>"""
>Almost all machines today (November 2000) use IEEE-754 floating point
>arithmetic, and almost all platforms map Python floats to IEEE-754
>"double precision".
>"""
>I'm very curious to know what the exceptions were in November 2000 and
>if they still exist. There is also the question of how much it matters
Maybe a few old Vaxes/Alphas running OpenVMS... Those machines had
something like four or five different floating point representations (F,
D, G, and H, that I recall -- single, double, double with extended
exponent range, and quad)
I actually used Python on an Alpha running OpenVMS a few years ago. IIRC, the
interpreter was built with IEEE floating point types rather than the other types.
--
Robert Kern
"I have come to believe that the whole world is an enigma, a harmless enigma
that is made terrible by our own mad attempt to interpret it as though it had
an underlying truth."
-- Umberto Eco
On Feb 27, 1:36 pm, Facundo Batista <facu...@taniquetil.com.ar> wrote:
Arnaud Delobelle wrote:
(and I don't want the standard Decimal class :)
Why?
Why should you? It only gives you 28 significant digits, while 64-bit
float (as in the 32-bit version of Python) gives you 53 significant
digits. Also note that on x86 the FPU uses 80-bit registers. And then
Decimal executes over 1500 times slower.
>>> from timeit import Timer
>>> t1 = Timer('(1.0/3.0)*3.0 - 1.0')
>>> t2 = Timer('(Decimal(1)/Decimal(3))*Decimal(3)-Decimal(1)',
...            'from decimal import Decimal')
>>> t2.timeit()/t1.timeit()
1621.7838879255889
If that's not enough to forget about Decimal, take a look at this:
>>> (Decimal(1)/Decimal(3))*Decimal(3) == Decimal(1)
False
>>> ((1.0/3.0)*3.0) == 1.0
True
On 27 Feb, 14:09, "Bart Ogryczak" <B.Ogryc...@gmail.com> wrote:
On Feb 27, 1:36 pm, Facundo Batista <facu...@taniquetil.com.ar> wrote:
Arnaud Delobelle wrote:
(and I don't want the standard Decimal class :)
Why?
Why should you? It only gives you 28 significant digits, while 64-bit
float (as in the 32-bit version of Python) gives you 53 significant
digits. Also note that on x86 the FPU uses 80-bit registers. And then
Decimal executes over 1500 times slower.
Actually, 28 significant digits is just the default; it can be set to
anything you like. Moreover, 53 significant bits (as that is what the 53
counts) is about 16 decimal digits.
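For instance, with the stdlib decimal module's context:

```python
from decimal import Decimal, getcontext

# 28 digits is only the default context precision; raise it at will
getcontext().prec = 50
print(Decimal(1) / Decimal(3))  # prints fifty 3s after the point
```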
>>> from timeit import Timer
>>> t1 = Timer('(1.0/3.0)*3.0 - 1.0')
>>> t2 = Timer('(Decimal(1)/Decimal(3))*Decimal(3)-Decimal(1)',
...            'from decimal import Decimal')
>>> t2.timeit()/t1.timeit()
1621.7838879255889
Yes. The internal representation of a Decimal is a tuple of one-digit
strings!
This is one of the reasons (by no means the main) why I decided to
write my own class.
If that's not enough to forget about Decimal, take a look at this:
>>> (Decimal(1)/Decimal(3))*Decimal(3) == Decimal(1)
False
>>> ((1.0/3.0)*3.0) == 1.0
True
OTOH float is not the panacea:
>>> 0.1+0.1+0.1==0.3
False
>>> 3*0.1==0.3
False
Decimals will behave better in this case.
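Concretely, with the stdlib Decimal:

```python
from decimal import Decimal

# The identities that fail for binary floats hold exactly in decimal
d = Decimal('0.1')
print(d + d + d == Decimal('0.3'))  # True
print(3 * d == Decimal('0.3'))      # True
```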
Cheers
--
Arnaud
Why should you? It only gives you 28 significant digits, while 64-bit
float (as in the 32-bit version of Python) gives you 53 significant
digits. Also note that on x86 the FPU uses 80-bit registers. And then
Decimal executes over 1500 times slower.
64-bit floating point only gives you 53 binary bits, not 53 digits.
That's approximately 16 decimal digits. And anyway, Decimal can be
configured to support more than 28 digits.
>>> from timeit import Timer
>>> t1 = Timer('(1.0/3.0)*3.0 - 1.0')
>>> t2 = Timer('(Decimal(1)/Decimal(3))*Decimal(3)-Decimal(1)',
...            'from decimal import Decimal')
>>> t2.timeit()/t1.timeit()
1621.7838879255889
If that's not enough to forget about Decimal, take a look at this:
>>> (Decimal(1)/Decimal(3))*Decimal(3) == Decimal(1)
False
>>> ((1.0/3.0)*3.0) == 1.0
True
Try ((15.0/11.0)*11.0) == 15.0. Decimal is actually returning the
correct result. Your example was just lucky.
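A quick check of both cases (assuming IEEE 754 doubles):

```python
# 1/3 happens to round back to exactly 1.0; 15/11 does not
print((1.0/3.0) * 3.0 == 1.0)      # True (lucky rounding)
print((15.0/11.0) * 11.0 == 15.0)  # False
```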
Decimal was intended to solve a different class of problems. It
provides predictable arithmetic using "decimal" floating point.
IEEE-754 provides predictable arithmetic using "binary" floating
point.
casevh
On Feb 27, 7:58 pm, "Arnaud Delobelle" <arno...@googlemail.com> wrote:
On 27 Feb, 14:09, "Bart Ogryczak" <B.Ogryc...@gmail.com> wrote:
On Feb 27, 1:36 pm, Facundo Batista <facu...@taniquetil.com.ar> wrote:
Arnaud Delobelle wrote:
(and I don't want the standard Decimal class :)
Why?
Why should you? It only gives you 28 significant digits, while 64-bit
float (as in the 32-bit version of Python) gives you 53 significant
digits. Also note that on x86 the FPU uses 80-bit registers. And then
Decimal executes over 1500 times slower.
Actually, 28 significant digits is just the default; it can be set to
anything you like. Moreover, 53 significant bits (as that is what the 53
counts) is about 16 decimal digits.
My mistake.
>>> from timeit import Timer
>>> t1 = Timer('(1.0/3.0)*3.0 - 1.0')
>>> t2 = Timer('(Decimal(1)/Decimal(3))*Decimal(3)-Decimal(1)',
...            'from decimal import Decimal')
>>> t2.timeit()/t1.timeit()
1621.7838879255889
Yes. The internal representation of a Decimal is a tuple of one-digit
strings!
This is one of the reasons (by no means the main) why I decided to
write my own class.
Why not GMP?
If that's not enough to forget about Decimal, take a look at this:
>>> (Decimal(1)/Decimal(3))*Decimal(3) == Decimal(1)
False
>>> ((1.0/3.0)*3.0) == 1.0
True
OTOH float is not the panacea:
My point is that neither is Decimal. It doesn't solve the problem, and
it creates an additional efficiency problem.
>>> 0.1+0.1+0.1==0.3
False
>>> 3*0.1==0.3
False
Decimals will behave better in this case.
Decimal will work fine as long as you deal only with decimal numbers
and the most basic arithmetic. Any rational number whose denominator is
not a power of 10, and any irrational number, won't have an exact
representation. So as long as you're dealing with something like
invoices, Decimal does just fine. When you start real calculations,
not only scientific but even financial ones[1], it doesn't do any
better than binary float, and it's bloody slow.
[1] eg. consider calculating interest rates, which are often defined as
math.pow(anualRate, days/365.0). That's not a rational number (unless
days happens to be a multiple of 365), therefore it has no exact
representation. BTW, math.* functions do not return Decimals.
On Feb 28, 10:38 pm, "Bart Ogryczak" <B.Ogryc...@gmail.com> wrote:
[1] eg. consider calculating interest rates, which are often defined as
math.pow(anualRate, days/365.0).
In what jurisdiction for what types of transactions? I would have
thought/hoped that the likelihood that any law, standard or procedure
manual would define an interest calculation in terms of the C stdlib
would be somewhere in the region of math.pow(epsilon, HUGE), not
"often".
More importantly, the formula you give is dead wrong. The correct
formula for converting an annual rate of interest to the rate of
interest to be used for n days (on the basis of 365 days per year) is:
(1 + annual_rate) ** (n / 365.0) - 1.0
or
math.pow(1 + annual_rate, n / 365.0) - 1.0
if you prefer.
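Sketched numerically with illustrative values (10% per annum over half a year):

```python
# Rate for n days from an annual rate, on a 365-day basis
annual_rate = 0.10
n = 182.5

period_rate = (1 + annual_rate) ** (n / 365.0) - 1.0
print(period_rate)  # ~0.0488, i.e. about 4.88% for six months
```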
BTW, math.* functions do not return Decimals.
For which Bell Labs be praised, but what's your point?
Cheers,
John
On Feb 28, 3:53 pm, "John Machin" <sjmac...@lexicon.net> wrote:
On Feb 28, 10:38 pm, "Bart Ogryczak" <B.Ogryc...@gmail.com> wrote:
[1] eg. consider calculating interest rates, which are often defined as
math.pow(anualRate, days/365.0).
In what jurisdiction for what types of transactions? I would have
thought/hoped that the likelihood that any law, standard or procedure
manual would define an interest calculation in terms of the C stdlib
would be somewhere in the region of math.pow(epsilon, HUGE), not
"often".
YPB? Have you ever heard of real-time systems?
More importantly, the formula you give is dead wrong. The correct
formula for converting an annual rate of interest to the rate of
interest to be used for n days (on the basis of 365 days per year) is:
(1 + annual_rate) ** (n / 365.0) - 1.0
or
math.pow(1 + annual_rate, n / 365.0) - 1.0
if you prefer.
YPB? Anyone with half a brain knows that you can either express the rate
as 0.07 and do all those ridiculous conversions above, or express it
as 1.07 and apply it directly.
BTW, math.* functions do not return Decimals.
For which Bell Labs be praised, but what's your point?
That Decimal is useless for anything but invoices.
On 28 Feb, 11:38, "Bart Ogryczak" <B.Ogryc...@gmail.com> wrote:
On Feb 27, 7:58 pm, "Arnaud Delobelle" <arno...@googlemail.com> wrote:
This is one of the reasons (by no means the main) why I decided to
write my own class.
Why not GMP?
I need decimals.
My point is that neither is Decimal. It doesn't solve the problem, and
it creates an additional efficiency problem.
>>> 0.1+0.1+0.1==0.3
False
>>> 3*0.1==0.3
False
Decimals will behave better in this case.
Decimal will work fine as long as you deal only with decimal numbers
and the most basic arithmetic. Any rational number whose denominator is
not a power of 10, and any irrational number, won't have an exact
representation.
My problem is precisely to represent rational numbers whose
denominator is a power of 10 (aka decimals) accurately.
So as long as you're dealing with something like
invoices, Decimal does just fine. When you start real calculations,
not only scientific but even financial ones[1], it doesn't do any
better than binary float, and it's bloody slow.
I'm not doing 'real world' calculations, I'm making an app to help
teach children maths. I need numerical values that behave well as
decimals. I also need them to have an arbitrary number of significant
figures. Floats are great but they won't help me with either. Amongst
other things, I need 3.0*0.1==0.3 to be True.
Please do not make the assumption that I have chosen to use a decimal
type without some careful consideration.
--
Arnaud
On Feb 28, 6:34 pm, "Arnaud Delobelle" <arno...@googlemail.com> wrote:
So as long as you're dealing with something like
invoices, Decimal does just fine. When you start real calculations,
not only scientific but even financial ones[1], it doesn't do any
better than binary float, and it's bloody slow.
I'm not doing 'real world' calculations, I'm making an app to help
teach children maths.
Without divisions?
I need numerical values that behave well as
decimals. I also need them to have an arbitrary number of significant
figures. Floats are great but they won't help me with either. Amongst
other things, I need 3.0*0.1==0.3 to be True.
How about (1.0/3.0)*3.0 == 1.0? That doesn't need to be True?
Please do not make the assumption that I have chosen to use a decimal
type without some careful consideration.
Well, you didn't indicate this before. Anyway, in that case efficiency
has no impact at all, so you might as well use Decimal.
Arnaud Delobelle wrote:
I'm not doing 'real world' calculations, I'm making an app to help
teach children maths. I need numerical values that behave well as
decimals. I also need them to have an arbitrary number of significant
figures. Floats are great but they won't help me with either. Amongst
other things, I need 3.0*0.1==0.3 to be True.
I guess that speed is not at a premium in your application,
so you might try my continued fractions module,
advertised here a few weeks ago: http://www-zo.iinf.polsl.gliwice.pl/...software/cf.py
It represents rational numbers exactly, and irrational numbers
with an arbitrary accuracy, i.e. unlike Decimal it implements
exact rather than multiple-precision arithmetic.
Once you create an expression, you can pull from it as many
decimal digits as you wish. The subexpressions dynamically
adjust their accuracy, so that you always get exact digits.
Regards,
Marcin
On Feb 28, 7:28 pm, Marcin Ciura <marcin.ci...@poczta.NOSPAMonet.pl>
wrote:
I guess that speed is not at a premium in your application,
so you might try my continued fractions module,
advertised here a few weeks ago: http://www-zo.iinf.polsl.gliwice.pl/...software/cf.py
Thanks for the link, I've had a quick glance and it seems great. I'll
have to take a closer look to see if I can use it for this app, but in
general it looks like a very useful piece of code.
--
Arnaud
On Mar 1, 4:19 am, "Bart Ogryczak" <B.Ogryc...@gmail.com> wrote:
On Feb 28, 3:53 pm, "John Machin" <sjmac...@lexicon.net> wrote:
On Feb 28, 10:38 pm, "Bart Ogryczak" <B.Ogryc...@gmail.com> wrote:
[1] eg. consider calculating interest rates, which are often defined as
math.pow(anualRate, days/365.0).
More importantly, the formula you give is dead wrong. The correct
formula for converting an annual rate of interest to the rate of
interest to be used for n days (on the basis of 365 days per year) is:
(1 + annual_rate) ** (n / 365.0) - 1.0
or
math.pow(1 + annual_rate, n / 365.0) - 1.0
if you prefer.
YPB? Anyone with half a brain knows that you can either express the rate
as 0.07 and do all those ridiculous conversions above, or express it
as 1.07 and apply it directly.
A conversion involving an exponentiation is necessary. "All those"?? I
see only two.
Please re-read your original post, and note that there are *TWO* plus-
or-minus 1.0 differences between your formula and mine. For an annual
rate of 10%, yours would calculate the rate for 6 months (expressed as
182.5 days) as:
math.pow(0.10, 0.5) = 0.316... i.e. 31.6%
My formula produces:
math.pow(1.10, 0.5) - 1.0 = 0.0488... i.e. 4.88%.
Which answer is ridiculous?
On Feb 28, 10:29 pm, "John Machin" <sjmac...@lexicon.net> wrote:
On Mar 1, 4:19 am, "Bart Ogryczak" <B.Ogryc...@gmail.com> wrote:
On Feb 28, 3:53 pm, "John Machin" <sjmac...@lexicon.net> wrote:
On Feb 28, 10:38 pm, "Bart Ogryczak" <B.Ogryc...@gmail.com> wrote:
[1] eg. consider calculating interest rates, which are often defined as
math.pow(anualRate, days/365.0).
More importantly, the formula you give is dead wrong. The correct
formula for converting an annual rate of interest to the rate of
interest to be used for n days (on the basis of 365 days per year) is:
(1 + annual_rate) ** (n / 365.0) - 1.0
or
math.pow(1 + annual_rate, n / 365.0) - 1.0
if you prefer.
YPB? Anyone with half a brain knows that you can either express the rate
as 0.07 and do all those ridiculous conversions above, or express it
as 1.07 and apply it directly.
A conversion involving an exponentiation is necessary. "All those"?? I
see only two.
Please re-read your original post, and note that there are *TWO* plus-
or-minus 1.0 differences between your formula and mine. For an annual
rate of 10%, yours would calculate the rate for 6 months (expressed as
182.5 days) as:
math.pow(0.10, 0.5) = 0.316... i.e. 31.6%
You're assuming that I'd store the annual rate as 0.1, while actually it'd
be stored as 1.1. Which is logical and applicable directly.
On Mar 1, 9:33 pm, "Bart Ogryczak" <B.Ogryc...@gmail.com> wrote:
On Feb 28, 10:29 pm, "John Machin" <sjmac...@lexicon.net> wrote:
On Mar 1, 4:19 am, "Bart Ogryczak" <B.Ogryc...@gmail.com> wrote:
On Feb 28, 3:53 pm, "John Machin" <sjmac...@lexicon.net> wrote:
On Feb 28, 10:38 pm, "Bart Ogryczak" <B.Ogryc...@gmail.com> wrote:
[1] eg. consider calculating interest rates, which are often defined as
math.pow(anualRate, days/365.0).
More importantly, the formula you give is dead wrong. The correct
formula for converting an annual rate of interest to the rate of
interest to be used for n days (on the basis of 365 days per year) is:
(1 + annual_rate) ** (n / 365.0) - 1.0
or
math.pow(1 + annual_rate, n / 365.0) - 1.0
if you prefer.
YPB? Anyone with half a brain knows that you can either express the rate
as 0.07 and do all those ridiculous conversions above, or express it
as 1.07 and apply it directly.
A conversion involving an exponentiation is necessary. "All those"?? I
see only two.
Please re-read your original post, and note that there are *TWO* plus-
or-minus 1.0 differences between your formula and mine. For an annual
rate of 10%, yours would calculate the rate for 6 months (expressed as
182.5 days) as:
math.pow(0.10, 0.5) = 0.316... i.e. 31.6%
You're assuming that I'd store the annual rate as 0.1, while actually it'd
be stored as 1.1. Which is logical and applicable directly.
Storing 1.1 and using it in calculations may save you a few
microseconds a day in your real-time apps. However the annual rate of
interest is 10% aka 0.1; naming 1.1 as "anualRate" (sic) is utterly
ludicrous.
John Machin wrote:
Storing 1.1 and using it in calculations may save you a few
microseconds a day in your real-time apps.
The main advantage would be clarity of code.
naming 1.1 as "anualRate" (sic) is utterly ludicrous.
So call it annualMultiplicationFactor or something
in the code.
--
Greg