
LARGE numbers

I've been thinking about writing a program to generate the world's
largest prime numbers, just for the fun of it. This would require holding
an 8,000,000-digit number in memory (about 27 million bits, or a little
over 3 MB, for just one variable...), plus several smaller variables.
This came about as I optimised a prime number generator more and more,
until I came up with the idea of trying to find the largest prime ever,
using Python. Any ideas? I'll probably try to run this on a mainframe
eventually, although they might not like it very much... I'll test it on
my home computer first. Anyway, let me know if there's a way to make
Python support numbers that big.
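
Back-of-the-envelope for that size estimate:

    import math

    digits = 8000000
    bits = digits * math.log(10, 2)   # bits needed for an 8,000,000-digit number
    print("%.1f million bits, about %.1f MB" % (bits / 1e6, bits / 8e6))
    # -> 26.6 million bits, about 3.3 MB
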
Thanks!

Nov 11 '05 #1
10 Replies


For more information on how the largest prime number was found, see
www.mersenne.org.
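
For background: the record holders are Mersenne primes, numbers of the
form 2^p - 1, and GIMPS finds them with the Lucas-Lehmer test. A minimal,
unoptimised sketch of that test in plain Python, just to show the shape
of the computation:

    def lucas_lehmer(p):
        """Return True if 2**p - 1 is prime; p must itself be an odd prime."""
        m = 2 ** p - 1
        s = 4
        for _ in range(p - 2):
            s = (s * s - 2) % m
        return s == 0

    # Small Mersenne-prime exponents are 3, 5, 7, 13, 17, 19, 31, ...
    print([p for p in (3, 5, 7, 11, 13, 17, 19, 23, 31) if lucas_lehmer(p)])
    # -> [3, 5, 7, 13, 17, 19, 31]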

Python does support large numbers, but it's not very fast for such
large numbers. There is a Python module called GMPY that uses the GMP
(Gnu Multiple Precision) library for faster operations on large
numbers.

Both Python and GMP use a binary format to store large numbers. Binary
format is most efficient for computation, but it is very slow to
convert huge numbers from the internal binary format to a decimal
format.
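
A rough way to see this on a much smaller number (times are
machine-dependent; this is only meant to show where the cost is, not to
be a benchmark):

    import sys, time

    if hasattr(sys, "set_int_max_str_digits"):   # newer CPythons cap int->str size
        sys.set_int_max_str_digits(0)            # 0 = no limit

    n = 2 ** 1000003                  # an integer of about 301,000 decimal digits
    t0 = time.time()
    product = n * n                   # multiplication on the internal binary form
    t1 = time.time()
    text = str(n)                     # conversion to a decimal string
    t2 = time.time()
    print("multiply: %.3f s   to decimal string: %.3f s" % (t1 - t0, t2 - t1))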

I have written a library designed specifically to operate with huge
numbers. It stores the numbers in a decimal format so conversion to
decimal format is very fast. On my not-quite-ready-to-release
development version, I can calculate, and convert to a string in
decimal format, the largest known prime (2^25964951 - 1) in less than
10 seconds. My best time so far is 6.6 seconds.

An early alpha-quality release is available at
http://home.comcast.net/~casevh/

I also have release candidate versions of GMPY for Windows on that
page.

I'll try to get the next release and some demos up in a few days.

Case

Nov 11 '05 #2

"Tuvas" <tu*****@gmail.com> writes:
I've been thinking about writing a program to generate the world's
largest prime numbers, just for the fun of it. This would require holding
an 8,000,000-digit number in memory (about 27 million bits, or a little
over 3 MB, for just one variable...), plus several smaller variables.
This came about as I optimised a prime number generator more and more,
until I came up with the idea of trying to find the largest prime ever,
using Python. Any ideas? I'll probably try to run this on a mainframe
eventually, although they might not like it very much... I'll test it on
my home computer first. Anyway, let me know if there's a way to make
Python support numbers that big.


Python already supports numbers that large:
>>> import math
>>> math.log(10 ** 8000000)
18420680.743952367


However, you probably want to look into something like gmpy, as you'll
get better performance out of it.
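
A quick sketch of what using it looks like (assuming the gmpy module is
installed; mpz is its arbitrary-precision integer type and mixes freely
with plain Python ints):

    import gmpy

    m = gmpy.mpz(2) ** 4423 - 1   # M_4423, a known Mersenne prime (1,332 digits)
    square = m * m                # arithmetic is carried out by GMP under the hood
    print(len(str(m)))            # -> 1332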

<mike
--
Mike Meyer <mw*@mired.org> http://www.mired.org/home/mwm/
Independent WWW/Perforce/FreeBSD/Unix consultant, email for more information.
Nov 11 '05 #3

ca****@comcast.net wrote:
An early alpha-quality release is available at
http://home.comcast.net/~casevh/

Given the module named "decimal" in Python 2.4, I'd suggest renaming
your library.
--
Giovanni Bajo
Nov 11 '05 #4

Already done for the next version. Tentatively, there will be a package
called "ar" (Arbitrary Radix) and the module will be called BigInt. I'm
also working on an arbitrary radix BigFloat module.

Case

Nov 11 '05 #5

<ca****@comcast.net> wrote:
...
Python does support large numbers, but it's not very fast for such
large numbers. There is a Python module called GMPY that uses the GMP
(Gnu Multiple Precision) library for faster operations on large
numbers.


As the author of gmpy, I'd like to point out that the speed difference
isn't all that large, if all you're doing is ordinary arithmetic -- a
few times at most (it can be better if you need some of GMP's
functionality which gmpy exposes, such as primality testing).
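
For example (a rough sketch; is_prime wraps GMP's probabilistic
primality test, and the exact return convention can vary between
versions):

    import gmpy

    m = gmpy.mpz(2) ** 89 - 1     # M_89, a known Mersenne prime
    print(gmpy.is_prime(m))       # nonzero: "probably prime"
    print(gmpy.is_prime(m + 2))   # 0: composite (2**89 + 1 is divisible by 3)
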
Alex
Nov 11 '05 #6

al***@mail.comcast.net (Alex Martelli) writes:
As the author of gmpy, I'd like to point out that the speed difference
isn't all that large, if all you're doing is ordinary arithmetic -- a
few times at most (it can be better if you need some of GMP's
functionality which gmpy exposes, such as primality testing).


For numbers of this size, won't gmpy use FFT-based multiplication?
That's potentially orders of magnitude faster than ordinary n**2
multiplication.
Nov 11 '05 #7

Paul Rubin wrote:
al***@mail.comcast.net (Alex Martelli) writes:

For numbers of this size, won't gmpy use FFT-based multiplication?
That's potentially orders of magnitude faster than ordinary n**2
multiplication.


But Python is no slouch with its use of Karatsuba multiplication
(in other words, Python is not N**2 for large numbers).

--Scott David Daniels
sc***********@acm.org
Nov 11 '05 #8


Paul Rubin wrote:
al***@mail.comcast.net (Alex Martelli) writes:
As the author of gmpy, I'd like to point out that the speed difference
isn't all that large, if all you're doing is ordinary arithmetic -- a
few times at most (it can be better if you need some of GMP's
functionality which gmpy exposes, such as primality testing).


For numbers of this size, won't gmpy use FFT-based multiplication?
That's potentially orders of magnitude faster than ordinary n**2
multiplication.


Python's native longs use Karatsuba multiplication, which is O(n^1.585).

My early version of DecInt (BigDecimal) uses 4-way Toom-Cook
multiplication, which is O(n^1.4). My development version uses
Nussbaumer convolution, which is O(n ln(n)).
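
Back-of-the-envelope on what those exponents mean for a 1,000,000-digit
operand (constant factors differ a lot between the algorithms, so only
the growth rates, not the absolute numbers, are meaningful):

    from math import log

    n = 1000000.0
    for name, ops in [("schoolbook, n**2      ", n ** 2),
                      ("Karatsuba, n**1.585   ", n ** 1.585),
                      ("Toom-Cook 4, n**1.4   ", n ** 1.4),
                      ("convolution, n*log(n) ", n * log(n))]:
        print("%s ~ %.1e" % (name, ops))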

For multiplication of two 1,000,000-digit numbers and conversion to a
decimal string, here are some times:

GMPY
multiplication: 0.96 seconds
conversion to string: 712.7 seconds

DecInt with GMPY
multiplication: 1.33 seconds
conversion to string: 0.83 seconds

DecInt without GMPY
multiplication: 2.84 seconds
conversion to string: 0.45 seconds

Python (using native long)
multiplication: 8.47 seconds
conversion to string: a really long time

Case

Nov 11 '05 #9

ca****@comcast.net writes:
Python's native longs use Karatsuba multiplication, which is O(n^1.585).
My early version of DecInt (BigDecimal) uses 4-way Toom-Cook ...


Wow, cool! Thanks.
Nov 11 '05 #10

Well, as I'll be doing lots of multiplication, I guess GMPY is the way
to go. I'll use DecInt only for converting to strings if I find anything
interesting. This is all just kind of a theoretical approach, but it can
be lots of fun. Who knows, maybe Python will help find the largest prime
number ever? That would sure be cool. Thanks for all of your help.

Nov 11 '05 #11
