
Number of digits in N!

Hello.

Is there a method for computing the number of digits, in a given numeric
base, of N factorial, without actually computing the factorial?

For example, 8! has 5 digits in base 10; 10! has 7 digits in base 10.
Is there a way to compute that information without actually evaluating
the factorial?
Oct 5 '06
Phil Carmody wrote:
...
>What you are really claiming above is that there's no decimal
representation of 10! at any point in your program.

Wrong.
Wrong.

--
Best regards,
Andrey Tarasevich

Oct 9 '06 #51
Richard Bos wrote:
>>
Of course you did! "6.559763033" and "3,628,800" and "3628800" are
representations for the same number, "factorial of ten". The first is
a written representation of the number on the logarithmic scale, while
the others are different representations of the number on linear
scales. But they're both but _representations_ for the same number.

No, they're not.
Yes, they are.
3,628,800 and 3628800 are indeed representations of the
same number; but 6.559763033 is only a representation of _an
approximation_ of the logarithm of the same number. That approximation
may lead you to the same number for this case, on your computer; but the
problem is that you can't possibly be sure that this will remain true
for subsequent factorials, and in fact you can be quite confident that
it will lead to an error sooner or later.
Just like any algorithm will lead to an error sooner or later. The only
reason it is an "approximation" is that the log-based algorithm of
calculating factorial starts to suffer from rounding errors sooner than
the straightforward algorithm. In that sense, if we wanted to calculate
the factorial, the log-based algorithm would make a bad practical choice
because it introduces rounding errors a lot sooner (as we can observe
in the 10! example). But for solving the original problem the log-based
algorithm works fine. It still calculates the factorial, though. It does it
_poorly_, but that's still not enough to say that it _doesn't_ calculate
the factorial.

--
Best regards,
Andrey Tarasevich

Oct 9 '06 #52
Pascal Bourguignon wrote:
The question wasn't to compute the result in a computationally
efficient way, it was to do it without computing the factorial.
The point of my original question came from an attempt to output
Pascal's Triangle further than was possible with the naive approach (you
reach int overflow well before you reach the rows of the series where
the int cannot represent the values themselves, because the intermediate
calculations require factorials).
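
For what it's worth, here is a minimal C sketch (not from this thread) of one
way around those intermediate factorials: build each binomial coefficient from
its neighbour via C(n,k+1) = C(n,k)*(n-k)/(k+1), so no factorial is ever
formed. The function name is illustrative only.

#include <stdio.h>

/* Print row n of Pascal's Triangle without computing any factorial.
   Sketch only: unsigned long long still overflows for moderately large n,
   both in the entries themselves and in the intermediate product. */
static void print_row(unsigned n)
{
    unsigned long long c = 1;                /* C(n,0) */
    unsigned k;
    for (k = 0; k <= n; k++) {
        printf("%llu ", c);
        c = c * (n - k) / (k + 1);           /* exact: C(n,k+1) is an integer */
    }
    putchar('\n');
}

int main(void)
{
    unsigned n;
    for (n = 0; n <= 10; n++)
        print_row(n);
    return 0;
}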

The other thing driving my original question was a desire to know, given
a constraint on the rows, how many columns of text would be required to
display the entire triangle.

The ensuing thread is very interesting to me, but some of the discussion
has missed the point of the original question.
Oct 9 '06 #53

"Pascal Bourguignon" <pj*@informatimago.comwrote in message
news:87************@thalassa.informatimago.com...
"Jon Slaughter" <Jo***********@Hotmail.comwrites:
>"Pascal Bourguignon" <pj*@informatimago.comwrote in message
news:87************@thalassa.informatimago.com. ..
>>"Proginoskes" <CC*******@gmail.comwrites:
Which I did. Again, the number 3,628,800 does not appear anywhere in my
calculations. Therefore, I calculated the number 7 "without computing
the factorial".

Of course you did! "6.559763033" and "3,628,800" and "3628800" are
representations for the same number, "factorial of ten". The first is
a written representation of the number on the logarithmic scale, while
the others are different representations of the number on linear
scales. But they're both but _representations_ for the same number.

lol. then what's the point. With your definition all numbers have the same
representation.
i.e., there is only one number... we might as well call it 0.
i.e., it's very easy to transform any number into another number and by your
definition if there exists such a transform then they must be equivalent.

If you can find an isomorphism between the structures of these
transforms and the set of numbers you consider, yep.
But an isomorphism always exists. Or, at least an isometry. y = ax + b and
even y = b are simple isomorphisms in a group (and hence a ring and field)
that let you get all the other elements.

f(x) = ax + b;

hence f(x + y) = a*(x+y) + 2b = a*x + b + a*y + b = f(x) + f(y)

hence f(x) is a homomorphism and it's obvious that it's an isomorphism.
>
>your logic fails because this is not true... they are only EQUIVALENT under
that transformation. They are not universally equivalent. You're basically
mixing apples with oranges.

I didn't speak of universal equivalence, I spoke of equivalence of the
logarithmic scale (with addition) and the linear scale (with
multiplication).

I'm not sure what you're talking about any more but if you think that just
because we compute algorithmically sum(log(k),k=2..n) then it is the same as
computing log(n!) then you're wrong. It can easily be proven by writing the
two algorithms out and comparing their run times.

Now, if you mean they are equivalent representations of the same
mathematical entity then you're right, that's the whole point. If they were not
then there would be no point in using the quicker algorithm. But just
because something is equivalent mathematically doesn't mean that it is
equivalent algorithmically (or in other ways too).

3 = 12/4.

these are different mathematical representations of the same concept... they
are not different algorithmic representations of the same concept as they
are also not different symbolic representations of the same concept (what I
mean is that symbolically they are not the same just as they are not
algorithmically the same but they are mathematically the same).

Now one might ask why things that are mathematically equivalent are not
algorithmically equivalent, and that's a valid question. I think the answer
is simply that if it was the case then we would probably be asking the
opposite. (and life would be much harder as there would be no way to
simplify things).

Oct 9 '06 #54

"Pascal Bourguignon" <pj*@informatimago.comwrote in message
news:87************@thalassa.informatimago.com...
"Jon Slaughter" <Jo***********@Hotmail.comwrites:
>"Andrey Tarasevich" <an**************@hotmail.comwrote in message
news:12*************@news.supernews.com...
>>Proginoskes wrote:
...
The question wasn't to compute the result in a computationally
efficient way, it was to do it without computing the factorial.

Which I did. Again, the number 3,628,800 does not appear anywhere in my
calculations. Therefore, I calculated the number 7 "without computing
the factorial".
...

The only reason you are having this argument is that you forgot to agree
on the terminology.

You are saying that "number 3,628,800 does not appear anywhere" in your
calculations. How do you know that? May I remind you that '3,628,800' is
_not_ really a number. We call it a "number" in everyday conversations
because in most everyday contexts calling '3,628,800' a "number" is not
a big error. It is an error, but, once again, everybody understands what
we are really trying to say.

Now, in this discussion the difference becomes more important. So, what
'3,628,800' really is, if it is not a "number"? '3,628,800' is a _decimal_
_representation_ of a number. Numbers are mathematical abstractions that
cannot be seen, smelt or touched. The only things about numbers that we
can see, smell or touch are particular _representations_ of numbers.

What you are really claiming above is that there's no decimal
representation of 10! at any point in your program. Fine, I'd be more
surprised if it was there. You can also safely claim that there's no
binary representation of 10! at any point in your program. That'd make
more sense, but that still misses the point. Because anyone can claim
that your sum of logarithms is nothing less than an exotic representation
of 10! and, therefore, you are calculating 10! in your code. You just
chose an obscure representation for the result in order to throw the
wolves off your trail.

What you say might be true but it's entirely off the point.

But that's still entirely my point.
>It doesn't matter what he calls it because the fact of the matter
is that computing the factorial of whatever is not needed when taking
the log of it. This is the whole point about logs. They convert
multiplication to addition. Factorials are multiplication and a log
of it allows you to compute the same result with
additions. Additions are much less expensive (in terms of speed and
memory) on today's processors, and so one only has to assume that.

And again, the OP didn't ask to compute the result without doing
multiplications, he asked to compute the result without computing
factorial of n.
I think you are twisting things though. Because in essence I can use the
same logic you are using and say, well, we're not computing n! but the number
of digits of n!.

i.e. log(n!) != n! (not numerically but conceptually).

If you want to in some abstract way say we are computing n! when we compute
sum(log(k)) then you can say that but no one will understand you and they
will think you mean something else.

What we mean is that in the algorithm we did not compute n! at any step.
What you mean is that we did when you rearrange the algorithm and do other
things to it.

i.e. we mean sum(log(k)) is different algorithmically to log(n!) and you
mean they are mathematically the same. just like 10 is mathematically the
same as 2 + 4 + 3 + 1... they are not algorithmically the same as one is
easier to "compute".
It's true that in some sense we are "computing" n! when we do sum(log(k)) but
this computation does not really exist, because if it did we could not compute
sum(log(k)) for very large n's... the same n's that would break log(n!).

i.e., you have to realize that you're wrong in some sense because we can
compute the sum for n = 10000 but not log(n!). i.e. there is a difference.
Now ask yourself what is the difference? Where does it break down? Obviously
there are only two spots... either the log part breaks it or the n! part.
It's not the log because then it would break in the first case too... so it
has to be the n!. but if we had n! in the sum then it would break too... it
doesn't and hence we don't have the n! in it. (and I'm dealing on the
algorithmic plane here).

Oct 9 '06 #55
Andrey wrote:
) What you are really claiming above is that there's no decimal
) representation of 10! at any point in your program. Fine, I'd be more
) surprised if it was there. You can also safely claim that there's no
) binary representation of 10! at any point in your program. That'd make
) more sense, but that still misses the point. Because anyone can claim
) that your sum of logarithms is nothing less than an exotic representation
) of 10! and, therefore, you are calculating 10! in your code. You just
) chose an obscure representation for the result in order to throw the
) wolves off your trail.

This sum of logarithms doesn't need to be exact, so all it represents is an
_approximation_ of the factorial. Thus it's not calculating the factorial.
SaSW, Willem
--
Disclaimer: I am in no way responsible for any of the statements
made in the above text. For all I know I might be
drugged or something..
No I'm not paranoid. You all think I'm paranoid, don't you !
#EOT
Oct 9 '06 #56

<ma************@gmail.comwrote in message
news:11**********************@k70g2000cwa.googlegroups.com...
>
Proginoskes wrote:
>Pascal Bourguignon wrote:
jmcgill <jm*****@email.arizona.eduwrites:
Is there a method for computing the number of digits, in a given
numeric
base, of N factorial, without actually computing the factorial?

For example, 8! has 5 digits in base 10; 10! has 7 digits in base 10.
Is there a way to compute that information without actually
evaluating
the factorial?

The function "number of digits" is a logarithm function, so computing
it is equivalent to computing the product.

Therefore, the answer to your question would be: NO.

Oh, really?

log 1 = 0
log 2 = 0.3010299957
log 3 = 0.4771212547
log 4 = 0.6020599914
log 5 = 0.6989700043
log 6 = 0.7781512504
log 7 = 0.8450980400
log 8 = 0.9030899871
log 9 = 0.9542425094
log 10 = 1.0

(All are accurate except for possibly to the last digit.)

The sum is 6.559763033 (last 2 digits possibly wrong, but the rest are
correct).

The answer is 7.

Where did I use the fact that 10! = 3,628,800?

--- Christopher Heckman

this post is dejavu.
what's the difference, one still has to compute log(x) somehow. it is
even easier to compute the factorial itself.
Yes, but try computing 1030203! and log(1030203!) (using some other method
than just taking the log of the first). Then ask yourself which is easier.
Sure it's much easier to compute 3!, 4! directly and then take the log... but
when you start dealing with problems where n is very large you get in way
over your head very quickly (not only with storage problems but time problems
too).
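
For the curious, a minimal C sketch of the sum-of-logs count being argued
about here (my own rendering of it; the function name is illustrative, and
the near-integer precision caveat discussed later in the thread still applies):

#include <math.h>
#include <stdio.h>

/* Digits of n! in base 10 via sum(log10(k), k=2..n), without ever
   forming n! itself.  Double precision is fine for small n; for huge n
   the accumulated rounding near an integer boundary would need care. */
static unsigned long factorial_digits(unsigned long n)
{
    double sum = 0.0;
    unsigned long k;
    if (n < 2)
        return 1;                      /* 0! = 1! = 1 */
    for (k = 2; k <= n; k++)
        sum += log10((double)k);
    return (unsigned long)floor(sum) + 1;
}

int main(void)
{
    printf("%lu\n", factorial_digits(8));     /* 5    */
    printf("%lu\n", factorial_digits(10));    /* 7    */
    printf("%lu\n", factorial_digits(1000));  /* 2568 */
    return 0;
}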
Oct 9 '06 #57
ma************@gmail.com wrote:
) Proginoskes wrote:
)Oh, really?
)>
)log 1 = 0
)log 2 = 0.3010299957
)log 3 = 0.4771212547
)log 4 = 0.6020599914
)log 5 = 0.6989700043
)log 6 = 0.7781512504
)log 7 = 0.8450980400
)log 8 = 0.9030899871
)log 9 = 0.9542425094
)log 10 = 1.0
)>
)(All are accurate except for possibly to the last digit.)
)>
)The sum is 6.559763033 (last 2 digits possibly wrong, but the rest are
)correct).
)>
)The answer is 7.
)>
)Where did I use the fact that 10! = 3,628,800?
)
) this post is dejavu.
) what's the difference, one still has to compute log(x) somehow. it is
) even easier to compute the factorial itself.

Let's change that calculation slightly then.

log 1 ~= 0.0
log 2 ~= 0.3
log 3 ~= 0.5
log 4 ~= 0.6
log 5 ~= 0.7
log 6 ~= 0.8
log 7 ~= 0.8
log 8 ~= 0.9
log 9 ~= 1.0
log 10 ~= 1.0

The sum is 6.6, so the answer is 7.
SaSW, Willem
--
Disclaimer: I am in no way responsible for any of the statements
made in the above text. For all I know I might be
drugged or something..
No I'm not paranoid. You all think I'm paranoid, don't you !
#EOT
Oct 9 '06 #58
"Jon Slaughter" <Jo***********@Hotmail.comwrites:
"Pascal Bourguignon" <pj*@informatimago.comwrote in message
news:87************@thalassa.informatimago.com...
>And again, the OP didn't ask to compute the result without doing
multiplications, he asked to compute the result without computing
factorial of n.

I think you are twisting things though. Because in essence I can use the
same logic you are using and say, well, we're not computing n! but the number
of digits of n!.

i.e. log(n!) != n! (not numerically but conceptually).

If you want to in some abstract way say we are computing n! when we compute
sum(log(k)) then you can say that but no one will understand you and they
will think you mean something else.

What we mean is that in the algorithm we did not compute n! at any step.
What you mean is that we did when you rearrange the algorithm and do other
things to it.

i.e. we mean sum(log(k)) is different algorithmically to log(n!) and you
mean they are mathematically the same. just like 10 is mathematically the
same as 2 + 4 + 3 + 1... they are not algorithmically the same as one is
easier to "compute".
It's true that in some sense we are "computing" n! when we do sum(log(k)) but
this computation does not really exist, because if it did we could not compute
sum(log(k)) for very large n's... the same n's that would break log(n!).
Agreed. My point is that the way the problem was stated, I don't
consider a solution summing the logarithms to be valid, while a
solution using Stirling approximation is.
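
For comparison, a minimal C sketch of the Stirling-based count (mine, not
Pascal's; the helper name is an assumption, and near-integer values of
log10(n!) would need extra series terms or an exact fallback):

#include <math.h>
#include <stdio.h>

/* Approximate digit count of n! in base 10 from Stirling's formula:
   ln(n!) ~ n*ln(n) - n + 0.5*ln(2*pi*n).  Sketch only. */
static unsigned long stirling_digits(unsigned long n)
{
    const double pi = 3.14159265358979323846;
    double ln_fact;
    if (n < 2)
        return 1;                               /* 0! = 1! = 1 */
    ln_fact = n * log((double)n) - (double)n + 0.5 * log(2.0 * pi * (double)n);
    return (unsigned long)floor(ln_fact / log(10.0)) + 1;
}

int main(void)
{
    printf("%lu\n", stirling_digits(10));     /* 7     */
    printf("%lu\n", stirling_digits(10000));  /* 35660, matching the count below */
    return 0;
}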

>i.e., you have to realize that you're wrong in some sense because we can
compute the sum for n = 10000 but not log(n!). i.e. there is a difference.
I'll spare you the 35 kbyte dump, but my computer has no problem in
computing 10000! and counting the number of digits needed to
represent it in base 10:

LISP(time (length (prin1-to-string (ext:! 10000))))
Real time: 0.304857 sec.
Run time: 0.296018 sec.
Space: 617096 Bytes
35660

--
__Pascal_Bourguignon__ _ Software patents are endangering
() ASCII ribbon against html email (o_ the computer industry all around
/\ 1962:DO20I=1.100 //\ the world http://lpf.ai.mit.edu/
2001:my($f)=`fortune`; V_/ http://petition.eurolinux.org/
Oct 9 '06 #59
Andrey Tarasevich <an**************@hotmail.comwrites:
Richard Bos wrote:
>>>
Of course you did! "6.559763033" and "3,628,800" and "3628800" are
representations for the same number, "factorial of ten". The first is
a written representation of the number on the logarithmic scale, while
the others are different representations of the number on linear
scales. But they're both but _representations_ for the same number.

No, they're not.

Yes, they are.
[...]

If "6.559763033" is a representation of 3628800, then how would you
represent 6.559763033?

"10**6.559763033" is an approximate representation of 3628800.
"6.559763033" is merely a representation of 6.559763033.

Note that "10!" is also a representation of 3628800, which saves you a
lot of work if you don't mind not knowing the actual answer.

--
Keith Thompson (The_Other_Keith) ks***@mib.org <http://www.ghoti.net/~kst>
San Diego Supercomputer Center <* <http://users.sdsc.edu/~kst>
We must do something. This is something. Therefore, we must do this.
Oct 9 '06 #60
Willem <wi****@stack.nlwrites:
ma************@gmail.com wrote:
) Proginoskes wrote:
)Oh, really?
)>
)log 1 = 0
)log 2 = 0.3010299957
)log 3 = 0.4771212547
)log 4 = 0.6020599914
)log 5 = 0.6989700043
)log 6 = 0.7781512504
)log 7 = 0.8450980400
)log 8 = 0.9030899871
)log 9 = 0.9542425094
)log 10 = 1.0
)>
)(All are accurate except for possibly to the last digit.)
)>
)The sum is 6.559763033 (last 2 digits possibly wrong, but the rest are
)correct).
)>
)The answer is 7.
)>
)Where did I use the fact that 10! = 3,628,800?
)
) this post is dejavu.
) what's the difference, one still has to compute log(x) somehow. it is
) even easier to compute the factorial itself.

Let's change that calculation slightly then.

log 1 ~= 0.0
log 2 ~= 0.3
log 3 ~= 0.5
log 4 ~= 0.6
log 5 ~= 0.7
log 6 ~= 0.8
log 7 ~= 0.8
log 8 ~= 0.9
log 9 ~= 1.0
log 10 ~= 1.0

The sum is 6.6, so the answer is 7.
That sounds more convincing.

The more so if the way you compute these approximations of log doesn't
involve all the algorithmic complexity of computing the log function...

After all, computing factorial n only involves n multiplications,
while computing only log(i) involves several additions,
multiplications and divisions.
--
__Pascal_Bourguignon__ _ Software patents are endangering
() ASCII ribbon against html email (o_ the computer industry all around
/\ 1962:DO20I=1.100 //\ the world http://lpf.ai.mit.edu/
2001:my($f)=`fortune`; V_/ http://petition.eurolinux.org/
Oct 9 '06 #61

Proginoskes wrote:
Jon Slaughter wrote:
"Pascal Bourguignon" <pj*@informatimago.comwrote in message
news:87************@thalassa.informatimago.com...
"Proginoskes" <CC*******@gmail.comwrites:
>Which I did. Again, the number 3,628,800 does not appear anywhere in my
>calculations. Therefore, I calculated the number 7 "without computing
>the factorial".
>
Of course you did! "6.559763033" and "3,628,800" and "3628800" are
representations for the same number, "factorial of ten". The first is
a written representation of the number on the logarithmic scale, while
the others are different representations of the number on linear
scales. But they're both but _representations_ for the same number.
>
lol. then what's the point. With your definition all numbers have the same
representation.
i.e., there is only one number... we might as well call it 0.

i.e., it's very easy to transform any number into another number and by your
definition if there exists such a transform then they must be equivalent.

your logic fails because this is not true... they are only EQUIVALENT under
that transformation. They are not universally equivalent. You're basically
mixing apples with oranges.

Yes, Pascal Bourguignon is only making himself look sillier and
sillier.
If anyone cares, it can be shown that, if Reverse Polish Notation is
used for the method of calculation, 10! never shows up in the
stack of numbers.

RPN means you have a stack of numbers; you can push a number on top, or
pull the top one off. You are also allowed some functions f, which pull
off the top number x, and push f(x) down on top of the stack. You can
do the same for operations.

For instance, 2*(3+6) would be coded like: 3 6 + 2 *

Step by step, this would give the following sequence of stacks (top =
left):

3
6 3
9 [here the top 2 numbers are added]
2 9
18 [here the top 2 numbers were multiplied]
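
A tiny illustrative evaluator along those lines (a sketch of mine:
space-separated tokens, a fixed-size stack, no error handling):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <ctype.h>

#define STACK_MAX 64

static double stack[STACK_MAX];
static int top = 0;                 /* number of items on the stack */

static void push(double x) { stack[top++] = x; }
static double pop(void)    { return stack[--top]; }

/* Evaluate a space-separated RPN expression, e.g. "3 6 + 2 *". */
static double rpn_eval(const char *expr)
{
    char buf[256];
    char *tok;

    strncpy(buf, expr, sizeof buf - 1);
    buf[sizeof buf - 1] = '\0';

    for (tok = strtok(buf, " "); tok != NULL; tok = strtok(NULL, " ")) {
        if (isdigit((unsigned char)tok[0])) {
            push(atof(tok));
        } else {
            double b = pop(), a = pop();   /* top of stack is the 2nd operand */
            switch (tok[0]) {
            case '+': push(a + b); break;
            case '-': push(a - b); break;
            case '*': push(a * b); break;
            case '/': push(a / b); break;
            }
        }
    }
    return pop();
}

int main(void)
{
    printf("%g\n", rpn_eval("3 6 + 2 *"));   /* prints 18 */
    return 0;
}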

Pascal seems to have given up this debate, and has moved on to
computational complexity.

--- Christopher Heckman

Oct 9 '06 #62

Pascal Bourguignon wrote:
Willem <wi****@stack.nlwrites:
ma************@gmail.com wrote:
) Proginoskes wrote:
)Oh, really?
)>
)log 1 = 0
)log 2 = 0.3010299957
)log 3 = 0.4771212547
)log 4 = 0.6020599914
)log 5 = 0.6989700043
)log 6 = 0.7781512504
)log 7 = 0.8450980400
)log 8 = 0.9030899871
)log 9 = 0.9542425094
)log 10 = 1.0
)>
)(All are accurate except for possibly to the last digit.)
)>
)The sum is 6.559763033 (last 2 digits possibly wrong, but the rest are
)correct).
)>
)The answer is 7.
)>
)Where did I use the fact that 10! = 3,628,800?
)
) this post is dejavu.
) what's the difference, one still has to compute log(x) somehow. it is
) even easier to compute the factorial itself.

Let's change that calculation slightly then.

log 1 ~= 0.0
log 2 ~= 0.3
log 3 ~= 0.5
log 4 ~= 0.6
log 5 ~= 0.7
log 6 ~= 0.8
log 7 ~= 0.8
log 8 ~= 0.9
log 9 ~= 1.0
log 10 ~= 1.0

The sum is 6.6, so the answer is 7.

That sounds more convincing.

The more so if the way you compute these approximations of log doesn't
involve all the algorithmic complexity of computing the log function...

After all, computing factorial n only involves n multiplications,
True, but the stored numbers grow in size while you're doing it.
Multiplication of two integers cannot be done in constant time;
however, it can be done in quadratic time (in terms of the sum of the
size of the integers).
while computing only log(i) involves several additions,
multiplications and divisions.
Of smaller numbers.

Pascal is comparing apples and oranges again.
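
To put rough numbers on that (my own back-of-the-envelope estimate, assuming
k fits in one machine word and schoolbook bignum arithmetic): at step k the
exact method multiplies a number of roughly log10((k-1)!) ~ k*log10(k) digits
by k, costing about that many word operations, so the total is on the order
of sum(k*log(k), k=2..n) ~ n^2*log(n) operations; the log-sum needs only n
fixed-precision log evaluations, i.e. O(n) floating-point operations (subject
to the precision caveats raised elsewhere in the thread).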

--- Christopher Heckman

Oct 9 '06 #63

"Pascal Bourguignon" <pj*@informatimago.comwrote in message
news:87************@thalassa.informatimago.com...
"Jon Slaughter" <Jo***********@Hotmail.comwrites:
>"Pascal Bourguignon" <pj*@informatimago.comwrote in message
news:87************@thalassa.informatimago.com. ..
>>And again, the OP didn't ask to compute the result without doing
multiplications, he asked to compute the result without computing
factorial of n.

I think you are twisting things though. Because in essence I can use the
same logic you are using and say, well, we're not computing n! but the
number
of digits of n!.

i.e. log(n!) != n! (not numerically but conceptually).

If you want to in some abstract way say we are computing n! when we
compute
sum(log(k)) then you can say that but no one will understand you and they
will think you mean something else.

What we mean is that in the algorithm we did not compute n! at any step.
What you mean is that we did when you rearrange the algorithm and do
other
things to it.

i.e. we mean sum(log(k)) is different algorithmically to log(n!) and you
mean they are mathematically the same. just like 10 is mathematically the
same as 2 + 4 + 3 + 1... they are not algorithmically the same as one is
easier to "compute".
It's true that in some sense we are "computing" n! when we do sum(log(k)) but
this computation does not really exist, because if it did we could not compute
sum(log(k)) for very large n's... the same n's that would break log(n!).

Agreed. My point is that the way the problem was stated, I don't
consider a solution summing the logarithms to be valid, while a
solution using Stirling approximation is.
Why not? That is what I don't understand. Is it not a solution to the
problem in the sense that the results are the same? Is it also not a faster
and more robust algorithm? I agree that the Stirling solution might be
better (but it is really an approximation, so it's not perfect). It all depends
on the criteria. The direct method of first computing n! is not a good one
IMO except in rare cases.
>
>i.e., you have to realize that you're wrong in some sense because we can
compute the sum for n = 10000 but not log(n!). i.e. there is a
difference.

I'll spare you the 35 kbyte dump, but my computer has no problem in
computing 10000! and counting the number of digits needed to
represent it in base 10:

LISP(time (length (prin1-to-string (ext:! 10000))))
Real time: 0.304857 sec.
Run time: 0.296018 sec.
Space: 617096 Bytes
35660
Ok, now do the same with the sum method... now use n=100000000; If this is
possible then I think your algorithm is mistaken.

Why? Because the number of digits is ceil(sum(log(k),k=2..100000000)).

It takes a few seconds to compute the sum and the result is 756,570,557. If
you represent a digit as a byte then that's 756MB of memory. That's a lot of
memory to use with your method... and that doesn't include the time it
takes to multiply it all out. Even though memory is cheap, all you have to
do is add another factor of 10 to see your method is not scalable. Of course,
maybe you plan on using it only for small n. Even then I wouldn't use that
method, since it's slow, unless I also needed to compute n! for some other
reason. (With that extra factor of 10, it comes to about 8 GB.)
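
(As a sanity check of that figure, using the Stirling estimate rather than
anything computed in the thread: log10(10^8!) ~ 10^8*log10(10^8) - 10^8/ln(10)
~ 8*10^8 - 4.34*10^7 ~ 7.566*10^8, i.e. about 756.6 million digits, which
agrees with the 756,570,557 quoted above.)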

As you can see, for n ~= 10^m there are on the order of 10^m digits. Adding
an additional factor of 10 to n adds (a bit more than) an additional factor
of 10 to the number of digits, so the storage and the multiplication work
needed to materialize n! itself grow without bound and quickly become
impractical.

Oct 9 '06 #64
Andrey Tarasevich wrote:
Richard Bos wrote:
>>Of course you did! "6.559763033" and "3,628,800" and "3628800" are
representations for the same number, "factorial of ten". The first is
a written representation of the number on the logarithmic scale, while
the others are different representations of the number on linear
scales. But they're both but _representations_ for the same number.
>No, they're not.
Yes, they are.
Mathematically, they can be.

Computationally, they are not.

Logarithms map certain rational numbers onto irrational numbers (and
vice versa), right?

Most useful models of computation assume you are making transformations
from finite state to finite state[1]. If there are two different
representations for one number and one requires an irrational number
(infinite state) and another requires a rational number (finite state),
they are not equivalent representations. Mainly because one of them
cannot exist and the other can. :-)

- Logan

[1] It has been a while since I took a class on the theory of
computation, but unless I'm mistaken, it would be correct to
say that even though the size of the state might be unbounded,
it is, at any given moment, finite. Any given computation
(of finite length, I guess[2]) maps finite state onto finite
state.

[2] Which is probably the only useful form of computation, making
the usual assumption that the goal of computation is to get to
the end state.
Oct 10 '06 #65
Andrey Tarasevich wrote:
Richard Bos wrote:
>3,628,800 and 3628800 are indeed representations of the
same number; but 6.559763033 is only a representation of _an
approximation_ of the logarithm of the same number. That approximation
may lead you to the same number for this case, on your computer; but the
problem is that you can't possibly be sure that this will remain true
for subsequent factorials, and in fact you can be quite confident that
it will lead to an error sooner or later.
Just like any algorithm will lead to an error sooner or later.
No. Some algorithms lose information as they perform transformations
on the state of the (hypothetical) machine. Others do not.
The only
reason it is an "approximation" is that the log-based algorithm of
calculating factorial starts to suffer from rounding errors sooner than
the straightforward algorithm.
No, the straightforward algorithm involves only rational numbers (which
can be represented with a finite amount of information) and thus never
needs to suffer from rounding errors. The log-based one must suffer from
rounding errors since it involves irrational numbers (which require an
infinite amount of information to be represented directly).

- Logan
Oct 10 '06 #66
Andrey Tarasevich <an**************@hotmail.comwrites:
Phil Carmody wrote:
...
What you are really claiming above is that there's no decimal
representation of 10! at any point in your program.
Wrong.

Wrong.
Show the world either:
1) Where Christopher makes any reference to decimal representations of 10!
2) Your complete ignorance, and disregard for correctness.

Clue - (2) will be much easier for you, you're 99% of the way down
the path already.

Phil
--
"Home taping is killing big business profits. We left this side blank
so you can help." -- Dead Kennedys, written upon the B-side of tapes of
/In God We Trust, Inc./.
Oct 10 '06 #67

Logan Shaw wrote:
Andrey Tarasevich wrote:
Richard Bos wrote:
3,628,800 and 3628800 are indeed representations of the
same number; but 6.559763033 is only a representation of _an
approximation_ of the logarithm of the same number. That approximation
may lead you to the same number for this case, on your computer; but the
problem is that you can't possibly be sure that this will remain true
for subsequent factorials, and in fact you can be quite confident that
it will lead to an error sooner or later.
Just like any algorithm will lead to an error sooner or later.

No. Some algorithms lose information as they perform transformations
on the state of the (hypothetical) machine. Others do not.
The only
reason it is an "approximation" is that the log-based algorithm of
calculating factorial starts to suffer from rounding errors sooner than
the straightforward algorithm.

No, the straightforward algorithm involves only rational numbers (which
can be represented with a finite amount of information) and thus never
needs to suffer from rounding errors. The log-based one must suffer from
rounding errors since it involves irrational numbers (which require an
infinite amount of information to be represented directly).
I was going to say: "But a certain amount of error is okay; if each log
is correct to within 1/(20n), then the original problem [find the
number of digits in N!] can be solved exactly. (The error of the sum is
at most 1/20 from the actual value.)" However, if the sum of the logs
turns out to be very close to an integer, more precision may be needed.
This raises an interesting question (of which I do not know the
answer): Given any
epsilon > 0, are there an infinite number of N's such that

{log(N!)} > 1 - epsilon ?

(Here, {x} denotes the fractional part of x, so {1.98} = 0.98.) If not,
then there is a fixed precision ((1-M)/(2*n), where M is the largest
value of {log(N!)}) which can be used for solving the "digits in N!
problem" for any N. (Basically, you calcluate the value of log(i) to
within (1-M)/(2*n).)

And logs are base 10, of course.

--- Christopher Heckman

Oct 10 '06 #68
Pascal Bourguignon <pj*@informatimago.comwrites:
Phil Carmody <th*****************@yahoo.co.ukwrites:
Pascal Bourguignon <pj*@informatimago.comwrites:
"Proginoskes" <CC*******@gmail.comwrites:
Which I did. Again, the number 3,628,800 does not appear anywhere in my
calculations. Therefore, I calculated the number 7 "without computing
the factorial".

Of course you did! "6.559763033" and "3,628,800" and "3628800" are
representations for the same number, "factorial of ten". The first is
a written representation of the number on the logarithmic scale, while
the others are different representations of the number on linear
scales. But they're both but _representations_ for the same number.
You appear to think, if your brain maintains any sort of consistency,
that the following algorithm

double foo(double x) { return x; }

calculates the cubes and fifth powers. And seventh powers.
And in fact any function for which there's a R<->R bijection.

I've spoke of isomorphism between fields, rings or groups, not just a
bijection.
Are you so stupid that you don't realise that *every* bijection
induces an isomorphism? Therefore your isomorphism really is
dime-an-aleph-null.

Phil
--
"Home taping is killing big business profits. We left this side blank
so you can help." -- Dead Kennedys, written upon the B-side of tapes of
/In God We Trust, Inc./.
Oct 10 '06 #69

Willem wrote:
ma************@gmail.com wrote:
) Proginoskes wrote:
)Oh, really?
)>
)log 1 = 0
)log 2 = 0.3010299957
)log 3 = 0.4771212547
)log 4 = 0.6020599914
)log 5 = 0.6989700043
)log 6 = 0.7781512504
)log 7 = 0.8450980400
)log 8 = 0.9030899871
)log 9 = 0.9542425094
)log 10 = 1.0
)>
)(All are accurate except for possibly to the last digit.)
)>
)The sum is 6.559763033 (last 2 digits possibly wrong, but the rest are
)correct).
)>
)The answer is 7.
)>
)Where did I use the fact that 10! = 3,628,800?
)
) this post is dejavu.
) what's the difference, one still has to compute log(x) somehow. it is
) even easier to compute the factorial itself.

Let's change that calculation slightly then.

log 1 ~= 0.0
log 2 ~= 0.3
log 3 ~= 0.5
log 4 ~= 0.6
log 5 ~= 0.7
log 6 ~= 0.8
log 7 ~= 0.8
log 8 ~= 0.9
log 9 ~= 1.0
log 10 ~= 1.0

The sum is 6.6, so the answer is 7.
SaSW, Willem
that looks fine but think how many multiplications it requires per
log(X) in order for the log sum to have absolute error no more than 1?
obviously you will need an increasing number of multiplications per
log(X), and thus it is worse than doing a single multiplication per X in
explicit factorial computation.

Oct 10 '06 #70
"jmcgill" <jm*****@email.arizona.eduwrote in message
news:81vVg.848$rS.773@fed1read05...
Stuart wrote:
>As the number of digits grows quite quickly, such "formatting" for the
latter rows is going to give very long rows. It will also make early
rows,
containing 1-2 digit numbers look quite barren.

Of course. It was just an experiment, as I found the exercise of
"creating the triangle" far too easy, and then found that of "formatting
in a single pass" to be beyond my understanding of numbers. I had no
intention of actually formatting a 6144-row Pascal's Triangle for
printing on 100-meter-wide paper :-)
And look at the trouble such a simple enquiring attitude can cause - I hope
you have learned your lesson ;-)

[Just in case you are relatively new to newsgroups - don't be put off by the
flamewars and try not to get sucked in!
Enjoy your programming!]

Regards
--
Stuart
Oct 10 '06 #71
In article <11**********************@b28g2000cwb.googlegroups.com"Proginoskes" <CC*******@gmail.comwrites:
....
This raises an interesting question (of which I do not know the
answer): Given any
epsilon > 0, are there an infinite number of N's such that

{log(N!)} > 1 - epsilon ?

(Here, {x} denotes the fractional part of x, so {1.98} = 0.98.) If not,
then there is a fixed precision ((1-M)/(2*n), where M is the largest
value of {log(N!)}) which can be used for solving the "digits in N!
problem" for any N. (Basically, you calcluate the value of log(i) to
within (1-M)/(2*n).)

And logs are base 10, of course.
I think that can not happen very often. It can only be true if N! is close
to a power of the base of the logarithm.
--
dik t. winter, cwi, kruislaan 413, 1098 sj amsterdam, nederland, +31205924131
home: bovenover 215, 1025 jn amsterdam, nederland; http://www.cwi.nl/~dik/
Oct 10 '06 #72
Proginoskes wrote:
This raises an interesting question (of which I do not know the
answer): Given any
epsilon > 0, are there an infinite number of N's such that

{log(N!)} > 1 - epsilon ?

(Here, {x} denotes the fractional part of x, so {1.98} = 0.98.) If
not, then there is a fixed precision ((1-M)/(2*n), where M is the
largest value of {log(N!)}) which can be used for solving the "digits
in N! problem" for any N. (Basically, you calcluate the value of
log(i) to within (1-M)/(2*n).)
I can't prove it either. But there are infinitely many distinct values
of {log(N!)} in (0, 1) so they must be dense around at least one point
in that range -- why not around 1.0 ?

(For what little it's worth, I plotted the first few million values,
and they appear to be scattered "randomly" over that interval, so
there's at least a hint that they /might/ be dense over all of (0, 1)
-- in which case you would have your result.)

-- chris
Oct 10 '06 #73
ma************@gmail.com wrote:
) that looks fine but think how many multiplications it requires per
) log(X) in order for the log sum to have absolute error no more than 1?
) obviously you will need an increasing number of multiplications per
) log(X), and thus it is worse than doing a single multiplication per X in
) explicit factorial computation.

You are assuming that multiplication has a fixed cost, which it hasn't.
SaSW, Willem
--
Disclaimer: I am in no way responsible for any of the statements
made in the above text. For all I know I might be
drugged or something..
No I'm not paranoid. You all think I'm paranoid, don't you !
#EOT
Oct 10 '06 #74
"Chris Uppal" <ch*********@metagnostic.REMOVE-THIS.orgwrites:
Proginoskes wrote:
This raises an interesting question (of which I do not know the
answer): Given any
epsilon > 0, are there an infinite number of N's such that

{log(N!)} > 1 - epsilon ?

(Here, {x} denotes the fractional part of x, so {1.98} = 0.98.)

I can't prove it either. But there are infinitely many distinct values
of {log(N!)} in (0, 1) so they must be dense around at least one point
in that range -- why not around 1.0 ?
Repeat that argument with the cantor set.

Phil
--
"Home taping is killing big business profits. We left this side blank
so you can help." -- Dead Kennedys, written upon the B-side of tapes of
/In God We Trust, Inc./.
Oct 10 '06 #75

Phil Carmody wrote:
"Chris Uppal" <ch*********@metagnostic.REMOVE-THIS.orgwrites:
Proginoskes wrote:
This raises an interesting question (of which I do not know the
answer): Given any
epsilon > 0, are there an infinite number of N's such that
>
{log(N!)} > 1 - epsilon ?
>
(Here, {x} denotes the fractional part of x, so {1.98} = 0.98.)
I can't prove it either. But there are infinitely many distinct values
of {log(N!)} in (0, 1) so they must be dense around at least one point
in that range -- why not around 1.0 ?

Repeat that argument with the cantor set.
The cantor set is dense at 1, isn't it?

The limit of 2/3, 2/3 + 2/3^2, 2/3 + 2/3^2 + 2/3^3, ... is 1, and all
of these fractions are in the Cantor set.

--- Christopher Heckman

Oct 11 '06 #76

ma************@gmail.com wrote:
Willem wrote:
ma************@gmail.com wrote:
) Proginoskes wrote:
)Oh, really?
)>
)log 1 = 0
)log 2 = 0.3010299957
)log 3 = 0.4771212547
)log 4 = 0.6020599914
)log 5 = 0.6989700043
)log 6 = 0.7781512504
)log 7 = 0.8450980400
)log 8 = 0.9030899871
)log 9 = 0.9542425094
)log 10 = 1.0
)>
)(All are accurate except for possibly to the last digit.)
)>
)The sum is 6.559763033 (last 2 digits possibly wrong, but the rest are
)correct).
)>
)The answer is 7.
)>
)Where did I use the fact that 10! = 3,628,800?
)
) this post is dejavu.
) what's the difference, one still has to compute log(x) somehow. it is
) even easier to compute the factorial itself.

Let's change that calculation slightly then.

log 1 ~= 0.0
log 2 ~= 0.3
log 3 ~= 0.5
log 4 ~= 0.6
log 5 ~= 0.7
log 6 ~= 0.8
log 7 ~= 0.8
log 8 ~= 0.9
log 9 ~= 1.0
log 10 ~= 1.0

The sum is 6.6, so the answer is 7.
SaSW, Willem

that looks fine but think how many multiplications it requires per
log(X) in order for the log sum to have absolute error no more than 1?
Actually, it's more complicated than that. If the absolute error is <
1, and the sum turns out to be 0.8, the answer to the OP still is
unknown; it could just as easily be 2.

No absolute error bound guarantees that the correct answer will be given,
because you have to round up to the nearest integer, and the ceiling
function is discontinuous at all integers.

That's why I wanted to know whether the fractional part of log(N!) can
be bounded away from 0 and 1.
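
One defensive way to act on that worry, sketched in C (my own sketch, with a
crude per-term error allowance rather than a rigorous bound): compute the sum,
and flag any result whose fractional part lies within the allowance of an
integer, since only those cases would need a higher-precision or exact recount.

#include <math.h>
#include <stdio.h>

/* Digit count of n! via summed logs, flagging results that are too close
   to an integer boundary to be trusted.  Sketch only: the per-term error
   allowance here is an assumed figure, not a proven bound. */
static int factorial_digits_checked(unsigned long n, unsigned long *digits)
{
    double sum = 0.0, allowance;
    unsigned long k;
    for (k = 2; k <= n; k++)
        sum += log10((double)k);
    allowance = (double)n * 1e-14;               /* assumed rounding per term */
    *digits = (unsigned long)floor(sum) + 1;
    return (sum - floor(sum)) > allowance &&
           (ceil(sum) - sum) > allowance;        /* 1 = trustworthy, 0 = recheck */
}

int main(void)
{
    unsigned long d;
    int ok = factorial_digits_checked(10000, &d);
    printf("%lu digits (%s)\n", d, ok ? "trusted" : "needs exact recheck");
    return 0;
}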

--- Christopher Heckman
obviously you will need an increasing number of multiplications per
log(X), and thus it is worse than doing a single multiplication per X in
explicit factorial computation.
Oct 11 '06 #77
Phil Carmody said:
Repeat that argument with the cantor set.
Repea at ar he ca set.

--
Richard Heathfield
"Usenet is a strange place" - dmr 29/7/1999
http://www.cpax.org.uk
email: rjh at above domain (but drop the www, obviously)
Oct 11 '06 #78
"Proginoskes" <CC*******@gmail.comwrites:
Phil Carmody wrote:
"Chris Uppal" <ch*********@metagnostic.REMOVE-THIS.orgwrites:
Proginoskes wrote:
>
This raises an interesting question (of which I do not know the
answer): Given any
epsilon > 0, are there an infinite number of N's such that

{log(N!)} > 1 - epsilon ?

(Here, {x} denotes the fractional part of x, so {1.98} = 0.98.)
>
I can't prove it either. But there are infinitely many distinct values
of {log(N!)} in (0, 1) so they must be dense around at least one point
in that range -- why not around 1.0 ?
Repeat that argument with the cantor set.

The cantor set is dense at 1, isn't it?
I'd need to review 20 year old lecture notes to find the definition of
dense to answer that.
The limit of 2/3, 2/3 + 2/3^2, 2/3 + 2/3^2 + 2/3^3, ... is 1, and all
of these fractions are in the Cantor set.
I accept that in the cantor set 1, and 0, and everything else, is an
accumulation point. I do not remember how that relates to being dense
or not though. If density requires simply one other point in every
open region about a point, then yes, it's dense.

I seem to remember reading that a positive measure fat cantor set (remove
1*1/4, 2*1/16, 4*1/64, etc...) is nowhere dense, and would have expected
it to be more dense than the cantor set.

I await enlightenment...

Phil
--
"Home taping is killing big business profits. We left this side blank
so you can help." -- Dead Kennedys, written upon the B-side of tapes of
/In God We Trust, Inc./.
Oct 11 '06 #79

Phil Carmody wrote:
"Proginoskes" <CC*******@gmail.comwrites:
Phil Carmody wrote:
"Chris Uppal" <ch*********@metagnostic.REMOVE-THIS.orgwrites:
Proginoskes wrote:

This raises an interesting question (of which I do not know the
answer): Given any
epsilon > 0, are there an infinite number of N's such that
>
{log(N!)} > 1 - epsilon ?
>
(Here, {x} denotes the fractional part of x, so {1.98} = 0.98.)

I can't prove it either. But there are infinitely many distinct values
of {log(N!)} in (0, 1) so they must be dense around at least one point
in that range -- why not around 1.0 ?
>
Repeat that argument with the cantor set.
The cantor set is dense at 1, isn't it?

I'd need to review 20 year old lecture notes to find the definition of
dense to answer that.
The limit of 2/3, 2/3 + 2/3^2, 2/3 + 2/3^2 + 2/3^3, ... is 1, and all
of these fractions are in the Cantor set.

I accept that in the cantor set 1, and 0, and everything else, is an
accumulation point. I do not remember how that relates to being dense
or not though. If density requires simply one other point in every
open region about a point, then yes, it's dense.
That's the definition I was going by.

--- Christopher Heckman
I seem to remember reading that a positive measure fat cantor set (remove
1*1/4, 2*1/16, 4*1/64, etc...) is nowhere dense, and would have expected
it to be more dense than the cantor set.

I await enlightenment...

Phil
--
"Home taping is killing big business profits. We left this side blank
so you can help." -- Dead Kennedys, written upon the B-side of tapes of
/In God We Trust, Inc./.
Oct 11 '06 #80
Phil Carmody wrote:

[me:]
I can't prove it either. But there are infinitely many distinct
values of {log(N!)} in (0, 1) so they must be dense around at least
one point in that range -- why not around 1.0 ?

Repeat that argument with the cantor set.
Interesting, thanks.

Maybe I was misusing the term "dense" (It's been decades since I last
had to use it). All I meant was that for every value, x, in (0, 1),
and for every epsilon, there is at least one point in the set which was
within epsilon of x. And analogously for "dense around a point".

This is the first time I've heard of Cantor's set (unless I've
forgotten it, of course); but it looks at first glance as if it is
"dense" (in my sense, whether or not that is correct terminology)
around 0, 1/3, 2/3, and 1 (and so on fractally). Is that wrong ?

-- chris
Oct 11 '06 #81

Chris Uppal wrote:
Phil Carmody wrote:

[me:]
I can't prove it either. But there are infinitely many distinct
values of {log(N!)} in (0, 1) so they must be dense around at least
one point in that range -- why not around 1.0 ?
Repeat that argument with the cantor set.

Interesting, thanks.

Maybe I was misusing the term "dense" (It's been decades since I last
had to use it). All I meant was that for every value, x, in (0, 1),
and for every epsilon, there is at least one point in the set which was
within epsilon of x. And analogously for "dense around a point".

This is the first time I've heard of Cantor's set (unless I've
forgotten it, of course); but it looks at first glance as if it is
"dense" (in my sense, whether or not that is correct terminology)
A set S being "dense at a point x" is usually phrased as "x is an
accumulation point of [a sequence of elements of] S" or "x is a limit
point of S".

--- Christopher Heckman
around 0, 1/3, 2/3, and 1 (and so on fractally). Is that wrong ?
Oct 11 '06 #82
Proginoskes <CC*******@gmail.comwrote:
>
This raises an interesting question (of which I do not know the
answer): Given any
epsilon > 0, are there an infinite number of N's such that

{log(N!)} > 1 - epsilon ?
Consider log(N!) for N = 10^n + x, x <= 3*10^(n/2). The successive values
(mod 1) are increasing, differ by O(10^(n/2)) and span an interval > 1.
Hence the fractional parts of log(N!) are dense in [0, 1].
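
(Filling in the step sizes, in case it helps -- my own expansion, not part of
Mike's post: for N = 10^n + x the increment is log(N+1) ~ n + x/(10^n*ln 10),
so each step moves {log(N!)} by only O(10^(-n/2)) over this range, while the
total movement, roughly sum(x/(10^n*ln 10), x=1..3*10^(n/2)) ~ 9/(2*ln 10)
~ 1.95 > 1, sweeps across the whole unit interval; taking n as large as you
like therefore puts a value of {log(N!)} within any given epsilon of any
target, including 1.)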
Mike Guy
Oct 11 '06 #83
I wrote:
>Proginoskes <CC*******@gmail.comwrote:
>>
This raises an interesting question (of which I do not know the
answer): Given any
epsilon > 0, are there an infinite number of N's such that

{log(N!)} > 1 - epsilon ?

Consider log(N!) for N = 10^n + x, x <= 3*10^(n/2). The successive values
(mod 1) are increasing, differ by O(10^(n/2)) and span an interval > 1.
^^^^^^^^
That should be 10^(_n/2) of course.
>Hence the fractional parts of log(N!) are dense in [0, 1].

Mike Guy
Oct 11 '06 #84
Richard Heathfield wrote:
Phil Carmody said:
>Repeat that argument with the cantor set.

Repea at ar he ca set.
ROFL (or at least the British equivalent).

Allin Cottrell
Wake Forest University
Oct 12 '06 #85
Keith Thompson wrote:
Andrey Tarasevich <an**************@hotmail.comwrites:
>Richard Bos wrote:
>>>>
Of course you did! "6.559763033" and "3,628,800" and "3628800" are
representations for the same number, "factorial of ten". The first is
a written representation of the number on the logarithmic scale, while
the others are different representations of the number on linear
scales. But they're both but _representations_ for the same number.

No, they're not.

Yes, they are.
[...]

If "6.559763033" is a representation of 3628800, then how would you
represent 6.559763033?

"10**6.559763033" is an approximate representation of 3628800.
"6.559763033" is merely a representation of 6.559763033.
Neither of these questions/statements makes sense without specifying which
concrete representation you are talking about. There's an infinite
number of different ones.

Even a mere "6.559763033" will represent completely different numbers
when interpreted as, say, either a decimal or a hexadecimal representation.
Note that "10!" is also a representation of 3628800, which saves you a
lot of work if you don't mind not knowing the actual answer.
That's true. Computer programs don't create information. They can only
transform it from one representation to another. The whole point of
this is to produce the final result in the representation that makes it
easy to perceive (or consume, if you will). A program that is supposed
to calculate factorial, but outputs the result of 10! as "10!" is
absolutely correct from the purely formal point of view. Yet it is
completely useless for obvious reasons.

Another important point is that in order to be useful a program should
produce its result in a convenient representation that is either

1) fixed and pre-defined, or
2) non-fixed, but the concrete representation can always be easily
derived from the program's output (easily = the complexity of such
derivation is [significantly] lower than the complexity of obtaining the
same result in some "conventional" representation).

Someone else here argued jokingly that a program that always produces 0
as its result can be thought of as solving every numerical problem in
the world (for some reason 42 feels like a better choice though) and one
just needs to find the "correct" representation. Technically, this
statement is more serious than it seems at first sight. Yes, this
program _does_ solve every numerical problem in the world. Yet this
program is useless simply because it violates the above requirement -
the complexity of finding the correct representation is equivalent to
the complexity of the problem itself.

In our original case the log-based algorithm calculates 10! in a
well-known fixed pre-defined representation. The representation itself
feels a little weird for an unprepared mind stuck on decimals or other
positional formats (which is another reason why it ignited such a
discussion), but this does not really change anything.

--
Best regards,
Andrey Tarasevich

Oct 12 '06 #86
Logan Shaw wrote:
Andrey Tarasevich wrote:
...
>The only
reason it is an "approximation" is that the log-based algorithm of
calculating factorial starts to suffer from rounding errors sooner than
the straightforward algorithm.

No, the straightforward algorithm involves only rational numbers (which
can be represented with a finite amount of information) and thus never
needs to suffer from rounding errors. The log-based one must suffer from
rounding errors since it involves irrational numbers (which require an
infinite amount of information to be represented directly).
...
No, that's not what I'm talking about. What I'm trying to say is that
assuming that we use, say, 4-byte numbers to represent intermediate
results in the program, either method will start to lose precision
sooner or later.

--
Best regards,
Andrey Tarasevich

Oct 12 '06 #87

Andrey Tarasevich wrote:
Keith Thompson wrote:
Andrey Tarasevich <an**************@hotmail.comwrites:
Richard Bos wrote:

Of course you did! "6.559763033" and "3,628,800" and "3628800" are
representations for the same number, "factorial of ten". The first is
a written representation of the number on the logarithmic scale, while
the others are different representations of the number on linear
scales. But they're both but _representations_ for the same number.

No, they're not.

Yes, they are.
[...]

If "6.559763033" is a representation of 3628800, then how would you
represent 6.559763033?

"10**6.559763033" is an approximate representation of 3628800.
"6.559763033" is merely a representation of 6.559763033.

Neither of this question/statements make sense without specifying which
concrete representation you are talking about. There's an infinite
number of different ones.

Even a mere "6.559763033" will represent completely different numbers
when interpreted as, say, either decimal or hexadecimal representation.
Random off-topic observation:

The number 21963283741 is the only positive integer represented by the
same set of bits on the PDP-6/10 whether as a float or as an integer. If
32-bit numbers (in IEEE format) are used instead, the only number
represented the same way in both representations is 0. (Source: HAKMEM
#174; see http://www.inwap.com/pdp10/hbaker/hakmem/hakmem.html for a
HTML version. There's lots of neat tricks & trivia in the HAKMEM
document, by the way.)

--- Christopher Heckman

Oct 12 '06 #88
M.J.T. Guy wrote:
Consider log(N!) for N = 10^n + x, x <= 3*10^(n/2). The
Could you explain what you mean by "10^n + x, x <= ...", please. Is it
sci-math-speak for "10^N + x where x is ..." ?

(mod 1) are increasing, differ by O(10^(n/2)) and span an interval >
1. ^^^^^^^^
That should be 10^(_n/2) of course.
And what does the _ signify here ?

-- chris
Oct 12 '06 #89

Andrey Tarasevich wrote:
That's true. Computer programs don't create information. They can only
transform it from one representation to the other. >
If computer programs don't create information, why then do simulations
exist?
Are not scientific theories equivalent to computer programs?
Chaos Theory contradicts that proposition, because a system of non-linear
differential equations produces utterly unpredictable results.
Ludovicus

Oct 12 '06 #90
Andrey Tarasevich wrote:
Logan Shaw wrote:
>Andrey Tarasevich wrote:
>>The only
reason it is an "approximation" is that the log-based algorithm of
calculating factorial starts to suffer from rounding errors sooner than
the straightforward algorithm.
>No, the straightforward algorithm involves only rational numbers (which
can be represented with a finite amount of information) and thus never
needs to suffer from rounding errors. The log-based one must suffer from
rounding errors since it involves irrational numbers (which require an
infinite amount of information to be represented directly).
No, that's not what I'm talking about. What I'm trying to say is that
assuming that we use, say, 4-byte numbers to represent intermediate
results in the program, either method will start to lose precision
sooner or later.
We could assume 4-byte integers, but that would be an artificial
restriction. Virtually every major computer language has support
for integers of unbounded (but finite, for any given integer) size.

Any given integer can be represented in a finite number of bits. The
same is not true for any particular given logarithm. Some of them
(such as log2(256)) can be represented in a finite number of bits, but
some of them cannot.

- Logan
Oct 13 '06 #91
Logan wrote:
) Any given integer can be represented in a finite number of bits. The
) same is not true for any particular given logarithm. Some of them
) (such as log2(256)) can be represented in a finite number of bits, but
) some of them cannot.

Well, you can't even represent 1/3 in a finite number of bits, then.
SaSW, Willem
--
Disclaimer: I am in no way responsible for any of the statements
made in the above text. For all I know I might be
drugged or something..
No I'm not paranoid. You all think I'm paranoid, don't you !
#EOT
Oct 13 '06 #92

"Logan Shaw" <ls**********@austin.rr.comwrote in message
news:nq******************@tornado.texas.rr.com...
Andrey Tarasevich wrote:
>Logan Shaw wrote:
>>Andrey Tarasevich wrote:
>>>The only
reason it is an "approximation" is that the log-based algorithm of
calculating factorial starts to suffer from rounding errors sooner than
the straightforward algorithm.
>>No, the straightforward algorithm involves only rational numbers (which
can be represented with a finite amount of information) and thus never
needs to suffer from rounding errors. The log-based one must suffer
from
rounding errors since it involves irrational numbers (which require an
infinite amount of information to be represented directly).
>No, that's not what I'm talking about. What I'm trying to say is that
assuming that we use, say, 4-byte numbers to represent intermediate
results in the program, either method will start to lose precision
sooner or later.

We could assume 4-byte integers, but that would be an artificial
restriction. Virtually every major computer language has support
for integers of unbounded (but finite, for any given integer) size.

Any given integer can be represented in a finite number of bits. The
same is not true for any particular given logarithm. Some of them
(such as log2(256)) can be represented in a finite number of bits, but
some of them cannot.
I just about shook myself apart laughing reading what I could of this
thread. My only question would be what the original post possibly could
have been. EC
Oct 13 '06 #93
"Elijah Cardon" <in*****@invalid.netwrites:
"Logan Shaw" <ls**********@austin.rr.comwrote in message
news:nq******************@tornado.texas.rr.com...
>Andrey Tarasevich wrote:
>>Logan Shaw wrote:
Andrey Tarasevich wrote:
>>>>The only
reason it is an "approximation" is that the log-based algorithm of
calculating factorial starts to suffer from rounding errors sooner than
the straightforward algorithm.
>>>No, the straightforward algorithm involves only rational numbers (which
can be represented with a finite amount of information) and thus never
needs to suffer from rounding errors. The log-based one must suffer
from
rounding errors since it involves irrational numbers (which require an
infinite amount of information to be represented directly).
>>No, that's not what I'm talking about. What I'm trying to say is that
assuming that we use, say, 4-byte numbers to represent intermediate
results in the program, either method will start to lose precision
sooner or later.

We could assume 4-byte integers, but that would be an artificial
restriction. Virtually every major computer language has support
for integers of unbounded (but finite, for any given integer) size.

Any given integer can be represented in a finite number of bits. The
same is not true for any particular given logarithm. Some of them
(such as log2(256)) can be represented in a finite number of bits, but
some of them cannot.
I just about shook myself apart laughing reading what I could of this
thread. My only question would be what the original post possibly could
have been. EC
It was:

From: jmcgill <jm*****@email.arizona.edu>
Subject: Number of digits in N!
Newsgroups: comp.lang.c, comp.programming, sci.math
Date: Thu, 05 Oct 2006 08:54:34 -0700
Organization: Cox Communications
Message-ID: <GS9Vg.766$rS.181@fed1read05>

Hello.

Is there a method for computing the number of digits, in a given numeric
base, of N factorial, without actually computing the factorial?

For example, 8! has 5 digits in base 10; 10! has 7 digits in base 10.
Is there a way to compute that information without actually evaluating
the factorial?

--
__Pascal Bourguignon__ http://www.informatimago.com/

PLEASE NOTE: Some quantum physics theories suggest that when the
consumer is not directly observing this product, it may cease to
exist or will exist only in a vague and undetermined state.
Oct 13 '06 #94
Willem wrote:
Logan wrote:
) Any given integer can be represented in a finite number of bits. The
) same is not true for any particular given logarithm. Some of them
) (such as log2(256)) can be represented in a finite number of bits, but
) some of them cannot.

Well, you can't even represent 1/3 in a finite number of bits, then.
Sure you can: with a `ratio` object with numerator 1 and denominator 3:

Setpop
1_/2 =>
** 1_/2
datakey(1_/2) =>
** <key ratio>

You could, if you wanted to, play this same game in C.
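
[For what it's worth, a minimal sketch of that same game in C; the struct and function names are mine. An exact rational is kept as a numerator/denominator pair in lowest terms, so 1/3 costs two longs and involves no rounding at all.]

#include <stdio.h>

/* Exact rational: num/den with den > 0, stored in lowest terms. */
struct ratio { long num, den; };

static long gcd(long a, long b)
{
    while (b != 0) { long t = a % b; a = b; b = t; }
    return a < 0 ? -a : a;
}

static struct ratio make_ratio(long num, long den)   /* den must be nonzero */
{
    struct ratio r;
    long g = gcd(num, den);
    if (den < 0) { num = -num; den = -den; }
    r.num = num / g;
    r.den = den / g;
    return r;
}

int main(void)
{
    struct ratio third = make_ratio(1, 3);        /* exactly 1/3 */
    printf("%ld/%ld\n", third.num, third.den);    /* prints 1/3 */
    return 0;
}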

--
Chris "Essen -6 and counting" Dollin
Scoring, bah. If I want scoring I'll go play /Age of Steam/.

Oct 13 '06 #95
"Pascal Bourguignon" writes:
>I just about shook myself apart laughing reading what I could of this
thread. My only question would be what the original post possibly could
have been. EC

It was:
Is there a method for computing the number of digits, in a given numeric
base, of N factorial, without actually computing the factorial?

For example, 8! has 5 digits in base 10; 10! has 7 digits in base 10.
Is there a way to compute that information without actually evaluating
the factorial?
The correct answer to which is "That is off topic. In this group we only
discuss the standardized C language." Instead we see 95 responses. So much
for topicality.
Oct 13 '06 #96
osmium wrote:
"Pascal Bourguignon" writes:
>>I just about shook myself apart laughing reading what I could of this
thread. My only question would be what the original post possibly could
have been. EC

It was:
>Is there a method for computing the number of digits, in a given numeric
base, of N factorial, without actually computing the factorial?

For example, 8! has 5 digits in base 10; 10! has 7 digits in base 10.
Is there a way to compute that information without actually evaluating
the factorial?

>The correct answer to which is "That is off topic. In this group we only
discuss the standardized C language." Instead we see 95 responses. So much
for topicality.
It was cross-posted.
--
Chris "Essen -6 and counting" Dollin
"People are part of the design. It's dangerous to forget that." /Star Cops/

Oct 13 '06 #97

"Pascal Bourguignon" <pj*@informatimago.comwrote in message
news:87************@thalassa.informatimago.com...
"Elijah Cardon" <in*****@invalid.netwrites:
snip
>I just about shook myself apart laughing reading what I could of this
thread. My only question would be what the original post possibly could
have been. EC

It was:

From: jmcgill <jm*****@email.arizona.edu>
Subject: Number of digits in N!
Newsgroups: comp.lang.c, comp.programming, sci.math
Date: Thu, 05 Oct 2006 08:54:34 -0700
Organization: Cox Communications
Message-ID: <GS9Vg.766$rS.181@fed1read05>

Hello.

Is there a method for computing the number of digits, in a given numeric
base, of N factorial, without actually computing the factorial?

For example, 8! has 5 digits in base 10; 10! has 7 digits in base 10.
Is there a way to compute that information without actually evaluating
the factorial?
Given that the question has been shot up like James Caan at the tollbooth, I
would offer the following. WLOG we can do base ten. The latest version of
Stirling's formula will not only give you a number; it will also give you
an upper bound on the error. You give me an N, I'll give you an m s.t. N! is
less than 10**m. That gives us a starting point to count backwards. I
think this recasts the problem. I know that when I backtrack to zero, I
will either fall asleep or say the correct number. EC
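[A hedged sketch of that bound in C99, using the standard lgamma() from <math.h> in place of writing out Stirling's series: lgamma(N+1) = ln(N!). The extra digit of slack is my own addition, to keep the bound safe against floating-point error.]

#include <math.h>
#include <stdio.h>

/* Return an m guaranteed to satisfy N! < 10^m.  floor(log10(N!)) + 1 is
   already the digit count; the extra +1 is slack for rounding in lgamma()
   and the division.  Compile with -lm. */
static long power_of_ten_bound(unsigned long n)
{
    double log10_fact = lgamma((double)n + 1.0) / log(10.0);
    return (long)floor(log10_fact) + 2;
}

int main(void)
{
    printf("10! < 10^%ld\n", power_of_ten_bound(10));   /* 3628800 < 10^8 */
    return 0;
}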
*****************
Kenny Rogers tonight.
Oct 13 '06 #98
Chris Dollin wrote:
>The correct answer to which is "That is off topic. In this group we only
discuss the standardized C language." Instead we see 95 responses. So much
for topicality.

It was cross-posted.
Someone posted on comp.lang.c with a legitimate question about
formatting Pascal's Triangle in C. Newsgroup posters abused and shunned
that original questioner. I wrote a solution. I found it far too easy,
so I started looking for a way to improve my solution. In particular, I
wanted to be able to format my triangle in centered columns (as it is
usually presented) by simply pre-calculating only the *last* row. This
turned out to be beyond my number skills, because the only method I
knew that did not involve sequentially evaluating the entire series in
advance, required the use of binomials in large factorials. These
numbers very quickly exceed the size of any standard C integer type.
But I believed I could get further, if I were able to do something else
to compute the length of that last line, so I looked for a way to
compute the number of digits in a huge number without computing the huge
number itself (obviously, I needed my memory tickled to remember the
reggae song about de log of the quotient equal to de difference of de
logs mon).
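[Purely as an illustration of that "difference of the logs" idea, and not one of the solutions posted earlier, here is a sketch that estimates the column width of row n of the triangle from log10 C(n,k) = log10 n! - log10 k! - log10 (n-k)!, with the usual caveat that a log-based digit count can be off by one when an entry sits right at a power of ten.]

#include <math.h>
#include <stdio.h>
#include <stdlib.h>

/* Approximate width, in characters, of row n of Pascal's triangle,
   assuming one space between entries.  No entry is ever computed, only
   the log10 of each binomial coefficient.  Compile with -lm. */
static unsigned long row_width(unsigned int n)
{
    double *lf = malloc((n + 1) * sizeof *lf);   /* lf[i] = log10(i!) */
    unsigned int i, k;
    unsigned long width = 0;

    if (lf == NULL)
        return 0;
    lf[0] = 0.0;
    for (i = 1; i <= n; i++)
        lf[i] = lf[i - 1] + log10((double)i);

    for (k = 0; k <= n; k++) {
        double lg = lf[n] - lf[k] - lf[n - k];   /* log10 of C(n,k) */
        width += (unsigned long)floor(lg) + 1;   /* digits in C(n,k) */
    }
    width += n;                                  /* separating spaces */
    free(lf);
    return width;
}

int main(void)
{
    printf("row 10 of the triangle needs roughly %lu columns\n",
           row_width(10));                       /* true value: 25 digits + 10 spaces */
    return 0;
}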
Anyway, I learned a great deal from the number theory discussion that
ensued in sci.math, and I was quite surprised that my simple question
engendered such a response.

I was also surprised and disappointed that nobody else tried to advise
the original poster, and nobody ever commented on my two original solutions.
James
Oct 13 '06 #99
ch**********@hp.com wrote:
Willem wrote:
>Logan wrote:
) Any given integer can be represented in a finite number of bits. The
) same is not true for any particular given logarithm. Some of them
) (such as log2(256)) can be represented in a finite number of bits, but
) some of them cannot.

Well, you can't even represent 1/3 in a finite number of bits, then.

Sure you can: with a `ratio` object with numerator 1 and denominator 3:
Or you could use "Cantor expansion". (Google comes back with lots
of hits.)
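
[Assuming "Cantor expansion" here means the factorial-base (Cantor series) representation, a small hedged sketch of why it settles the 1/3 objection: every rational in [0,1) terminates in that base, because the denominator divides k! once k is large enough, so 1/3 comes out as the two finite digits 0/2! + 2/3!.]

#include <stdio.h>

/* Print the factorial-base (Cantor series) digits of num/den, 0 <= num < den:
   x = d2/2! + d3/3! + ..., with 0 <= dk <= k-1.  Terminates for every
   rational, since den divides k! once k >= den. */
static void cantor_expand(long num, long den)
{
    long k = 2;
    int first = 1;
    printf("%ld/%ld =", num, den);
    while (num != 0) {
        num *= k;
        printf("%s%ld/%ld!", first ? " " : " + ", num / den, k);
        first = 0;
        num %= den;
        k++;
    }
    printf("\n");
}

int main(void)
{
    cantor_expand(1, 3);   /* prints: 1/3 = 0/2! + 2/3!  (2/6 == 1/3) */
    return 0;
}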

Bart

--
The man without a .sig
Oct 13 '06 #100
