
128 or 96 bit integer types?

Hi,

Is there built-in or third-party support for large integer types, such
as 96 or 128 bits in size? I require such large sizes for precision
issues (nanoseconds). Thanks.

Jul 27 '07 #1
10 Replies


On Fri, 27 Jul 2007 16:45:05 +0000, Robert Dailey wrote:
> Is there built-in or third-party support for large integer types, such
> as 96 or 128 bits in size?

Yes there is, just use integer values. If it doesn't fit into an `int` it
gets promoted to a `long`. Python `long`\s are only bounded by available
memory.

In [59]: 2**128
Out[59]: 340282366920938463463374607431768211456L
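If fixed-width behaviour is actually wanted (wraparound at 2**128, say, to
model a hardware counter) rather than just large values, masking supplies
it. A minimal sketch in Python 3 syntax, with illustrative names:

```python
MASK128 = (1 << 128) - 1  # 2**128 - 1, i.e. 128 one-bits

def add128(a, b):
    """Add modulo 2**128, mimicking an unsigned 128-bit register."""
    return (a + b) & MASK128

print(add128(MASK128, 1))  # wraps around to 0
```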

Ciao,
Marc 'BlackJack' Rintsch
Jul 27 '07 #2

Robert Dailey wrote:
> Is there built-in or third-party support for large integer types, such
> as 96 or 128 bits in size? I require such large sizes for precision
> issues (nanoseconds). Thanks.

>>> SECOND = 10**9
>>> YEAR = 365*24*60*60
>>> 2**128/SECOND/YEAR
10790283070806014188970L

What are you measuring? The age of the universe? In nanoseconds?

:-)
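In the same back-of-the-envelope style, note that even a signed 64-bit
nanosecond counter (not one of the sizes asked about) covers roughly 292
years:

```python
SECOND = 10**9                  # nanoseconds per second
YEAR = 365 * 24 * 60 * 60       # seconds per (non-leap) year
print(2**63 // SECOND // YEAR)  # signed 64-bit range, in years: 292
```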

Peter
Jul 27 '07 #3

On Jul 27, 1:27 pm, Peter Otten <__pete...@web.de> wrote:
> Robert Dailey wrote:
> > Is there built-in or third-party support for large integer types, such
> > as 96 or 128 bits in size? I require such large sizes for precision
> > issues (nanoseconds). Thanks.
>
> >>> SECOND = 10**9
> >>> YEAR = 365*24*60*60
> >>> 2**128/SECOND/YEAR
> 10790283070806014188970L
>
> What are you measuring? The age of the universe? In nanoseconds?
>
> :-)
>
> Peter

Well, 2**96 would only be 2512308552583 years.

Jul 27 '07 #4

"me********@aol.com" <me********@aol.com> wrote:
> On Jul 27, 1:27 pm, Peter Otten <__pete...@web.de> wrote:
> > Robert Dailey wrote:
> > > Is there built-in or third-party support for large integer types,
> > > such as 96 or 128 bits in size? I require such large sizes for
> > > precision issues (nanoseconds). Thanks.
> >
> > >>> SECOND = 10**9
> > >>> YEAR = 365*24*60*60
> > >>> 2**128/SECOND/YEAR
> > 10790283070806014188970L
> >
> > What are you measuring? The age of the universe? In nanoseconds?
> >
> > :-)
>
> Well, 2**96 would only be 2512308552583 years.
Yes, but that's still roughly 100 times the estimated age of the universe.
--
Tim Roberts, ti**@probo.com
Providenza & Boekelheide, Inc.
Jul 28 '07 #5

"me********@aol.com" <me********@aol.com> writes:
> has 146 digits. And that's just the beginning. The above
> actually represents a polynomial with 264 terms, the
> exponents of which range from 0 to 492. One of those
> polynomials can have over 50000 decimal digits when
> solved.
You should use gmpy rather than python longs if you're dealing with
numbers of that size. Python multiplication uses a straightforward
O(n**2) algorithm where n is the number of digits. This is the best
way for up to a few hundred or maybe a few thousand digits. After
that, it's better to use more complicated FFT-based algorithms which
are O(n log n) despite their increased constant overhead. Gmpy does this.
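A minimal sketch of the suggested switch: gmpy's mpz constructor is the
entry point. The import fallback chain here is an addition for
illustration (not from the original post); it keeps the snippet runnable
where neither gmpy nor its successor gmpy2 is installed:

```python
try:
    from gmpy import mpz   # classic gmpy, as discussed in the thread
except ImportError:
    try:
        from gmpy2 import mpz  # gmpy's maintained successor
    except ImportError:
        mpz = int          # fall back to plain Python integers

# Arithmetic on mpz values stays in GMP's fast big-number routines.
a = mpz(2) ** 4096 + 1
b = mpz(2) ** 4096 - 1
print(a * b == mpz(2) ** 8192 - 1)  # prints True
```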
Jul 28 '07 #6

On Jul 28, 2:28 am, Paul Rubin <http://phr...@NOSPAM.invalid> wrote:
> "mensana...@aol.com" <mensana...@aol.com> writes:
> > has 146 digits. And that's just the beginning. The above
> > actually represents a polynomial with 264 terms, the
> > exponents of which range from 0 to 492. One of those
> > polynomials can have over 50000 decimal digits when
> > solved.
>
> You should use gmpy rather than python longs if you're dealing with
> numbers of that size. Python multiplication uses a straightforward
> O(n**2) algorithm where n is the number of digits. This is the best
> way for up to a few hundred or maybe a few thousand digits. After
> that, it's better to use more complicated FFT-based algorithms which
> are O(n log n) despite their increased constant overhead. Gmpy does this.
I actually do use gmpy. Great stuff. But one thing I
learned about gmpy is to never use literals inside
a loop. Otherwise the mpz coercion has to be done
every time and that kills performance.

So you would do something like

import gmpy
ONE = gmpy.mpz(1)
TWO = gmpy.mpz(2)
TWE = gmpy.mpz(3)

n = gmpy.mpz(2**177149 - 1)

# Collatz-style iteration; the hoisted mpz constants avoid
# re-coercing a literal on every pass through the loop.
while n > ONE:
    if n % TWO == ONE:
        n = TWE*n + ONE
    else:
        n = n/TWO

Jul 28 '07 #7

Paul Rubin wrote:
> "me********@aol.com" <me********@aol.com> writes:
> > has 146 digits. And that's just the beginning. The above
> > actually represents a polynomial with 264 terms, the
> > exponents of which range from 0 to 492. One of those
> > polynomials can have over 50000 decimal digits when
> > solved.
>
> You should use gmpy rather than python longs if you're dealing with
> numbers of that size. Python multiplication uses a straightforward
> O(n**2) algorithm where n is the number of digits.

That hasn't been true for quite a while. CPython now uses
Karatsuba multiplication when the number of digits exceeds a
certain threshold. Karatsuba is O(n**(log(3)/log(2))).

> This is the best
> way for up to a few hundred or maybe a few thousand digits. After
> that, it's better to use more complicated FFT-based algorithms which
> are O(n log n) despite their increased constant overhead. Gmpy does this.

Karatsuba is still slower than these algorithms, but only if you have
quite a big number of digits. Of course the core of your argument
remains valid: CPython is not up to providing extremely good big integer
arithmetic, so if you have extreme needs, you shouldn't use the builtin
longs.
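Karatsuba's three-multiplication split can be sketched in a few lines of
Python 3. This is illustrative only, not CPython's actual C
implementation, which works on base-2**15 "digit" arrays:

```python
def karatsuba(x, y):
    """Multiply nonnegative ints with three recursive multiplications
    instead of four, giving O(n**1.585) instead of O(n**2)."""
    if x < 10 or y < 10:
        return x * y
    n = max(x.bit_length(), y.bit_length()) // 2
    hi_x, lo_x = x >> n, x & ((1 << n) - 1)   # x = hi_x*2**n + lo_x
    hi_y, lo_y = y >> n, y & ((1 << n) - 1)   # y = hi_y*2**n + lo_y
    a = karatsuba(hi_x, hi_y)
    b = karatsuba(lo_x, lo_y)
    # The third product yields the cross terms without two more multiplies.
    c = karatsuba(hi_x + lo_x, hi_y + lo_y) - a - b
    return (a << (2 * n)) + (c << n) + b

print(karatsuba(12345678901234567890, 98765432109876543210))
```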

Cheers,

Carl Friedrich Bolz
Jul 29 '07 #8


In article <11*********************@x40g2000prg.googlegroups.com>,
me********@aol.com <me********@aol.com> wrote:
> To actually answer your question, there is a known loop
> cycle in 3n+85085 for which p=492 and q=264. If there is
> one solution, there must be at least 263 others (the
> cyclic permutations), but to brute force search for any
> others would require enumerating the answer to how many
> ways can 492 marbles be put in 264 bins such that each
> bin has at least 1 marble.
Thank you very much. I am awestruck.
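As an aside, that count is a standard "stars and bars" value: the number
of ways to split 492 into 264 positive parts is C(491, 263). With Python
3.8+ it is a one-liner:

```python
from math import comb

# 492 marbles into 264 nonempty bins: choose 263 of the 491 gaps
# between consecutive marbles as the bin boundaries.
print(comb(491, 263))
```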

--
David Wild using RISC OS on broadband
www.davidhwild.me.uk
Jul 29 '07 #10

"me********@aol.com" <me********@aol.com> wrote:
>
So I never let the age of the universe intimidate me.
+1 QOTW.
--
Tim Roberts, ti**@probo.com
Providenza & Boekelheide, Inc.
Jul 30 '07 #11
