
Long Num speed

For a while now I have been "playing" with a little C program to
compute the factorial of large numbers. Currently it takes about 1
second per 1000 multiplications; that is, 25000P1000 will take about a
second. It will take longer for 50000P1000, as expected, since more digits
will be in the answer. Now, on the Num Analyses forum/group there is a
post reporting that that person wrote Java code that computed 1000000!
in about a second. That is about 10000 times faster than I would expect
my code to do it. So the two possibilities are:
1) I am doing something terribly wrong
2) The other person is lying

At the moment I am inclined to believe that it's number 1.

I am posting my code below; I would like to hear your opinions about
why it is slow and how I can improve its speed.

I know that there are public BIGNUM libraries which are already
optimized for such calculations, but I don't want to use them, because I
want to approach this problem at a lower level. I am mostly interested
in finding out how to make this code perform faster, or what alternative
algorithms I should consider. The factorial calculation is just a test
program.

==================== start paste ====================

#include <stdio.h>
#include <stdlib.h>
#include <math.h>

#define al 1024*20
#define base 1000
typedef long int IntegerArrayType;

struct AEI{
    IntegerArrayType data[al];
    long int digits;
};

void pack(IntegerArrayType i, struct AEI *N1);
void Amult(struct AEI *A, struct AEI *B, struct AEI *C);
void AEIprintf(struct AEI *N1);
void AEIfprintf(FILE *fp, struct AEI *N1);
int main(void)
{

    struct AEI *N1, *MO, *Ans;
    long i = 0, j = 0, ii, NUM, iii;
    FILE *ff;

    N1 = malloc(sizeof(struct AEI));
    MO = malloc(sizeof(struct AEI));
    Ans = malloc(sizeof(struct AEI));
    while (i < al){
        N1->data[i] = 0;
        MO->data[i] = 0;
        Ans->data[i] = 0;
        i++;
    }

    printf("Enter integer to Factorialize: ");
    scanf("%ld", &NUM);

    pack(1, N1);
    pack(1, Ans);
    ff = fopen("Results.txt", "w");
    printf("you entered: %ld", NUM);

    i = 1;
    while(i < NUM){

        iii = 0;
        while(iii < NUM && iii < 1000){
            ii = 1;
            while (ii < al)
            {
                MO->data[ii] = 0;
                ii++;
            }
            pack((i+iii), MO);
            Amult(N1, MO, N1);
            iii++;
        }
        i += iii;
        Amult(Ans, N1, Ans);
        printf("\nProgress is: %ld", i);
        pack(1, N1);
    }
    if(ff != NULL){
        fprintf(ff, "\n%ld\n", i-1);
        AEIfprintf(ff, Ans);
        fclose(ff);
    }

    printf("\nProgress: 100%%");

    return 0;
}
void AEIprintf(struct AEI *N1){

    double fieldLength;
    double temp;
    char format1[8];
    long j, FL0;
    j = N1->digits-1;
    FL0 = (long)log10((float)base);
    fieldLength = (float)log10((float)base);
    temp = modf(fieldLength, &fieldLength);
    format1[0] = '%';
    format1[1] = '0';
    format1[2] = fieldLength + 48;
    format1[3] = 'l';
    format1[4] = 'd';
    format1[5] = 0x00;

    printf("%*ld", (int)FL0, N1->data[j]);
    j--;

    while (j >= 0)
    {
        printf(format1, N1->data[j]);

        j--;
    }

    return;
}
void AEIfprintf(FILE *fp, struct AEI *N1){
    long j = N1->digits-1;

    double fieldLength, temp;
    char format0[8], format1[8];

    fieldLength = (int)log10(base);
    temp = modf(fieldLength, &fieldLength);

    format0[0] = '%';
    format0[1] = fieldLength + 48;
    format0[2] = 'l';
    format0[3] = 'd';
    format0[4] = 0x00;
    format1[0] = '%';
    format1[1] = '0';
    format1[2] = fieldLength + 48;
    format1[3] = 'l';
    format1[4] = 'd';
    format1[5] = 0x00;

    fprintf(fp, format0, N1->data[j]);
    j--;

    while (j >= 0){
        fprintf(fp, format1, N1->data[j]);
        j--;
    }
    return;
}

void pack(IntegerArrayType i, struct AEI *N1)
{
    long t = 1, i1, j = 0;

    while (t == 1){
        i1 = i % base;
        N1->data[j] = i1;
        i = (i - i1) / base;
        j++;
        if (i == 0)
            t = 0;
    }
    N1->digits = j;
    return;
}


void Amult(struct AEI *A, struct AEI *B, struct AEI *C){
    /* C = A * B; */
    long i, ii, d, result, carry = 0, digits = 0;
    struct AEI *Ans;

    /* temporary product buffer, so that C may be the same object as A or B */
    Ans = malloc(sizeof(struct AEI));
    i = 0;
    d = (A->digits + B->digits - 1);
    while(i < d){
        Ans->data[i] = carry;
        carry = 0;
        ii = 0;
        while(ii <= i){
            if(B->data[ii] != 0){
                Ans->data[i] += A->data[i-ii] * B->data[ii];
                carry += Ans->data[i] / base;
                Ans->data[i] = Ans->data[i] % base;
            }
            ii++;
        }
        carry += Ans->data[i] / base;
        Ans->data[i] = Ans->data[i] % base;

        i++;
    }
    if(carry != 0){
        d++;
        Ans->data[i] = carry;
    }

    C->digits = d;
    i = 0;
    while(i < d){
        C->data[i] = Ans->data[i];
        i++;
    }
    free(Ans);
    return;
}

===================== end paste =====================

I tried to indent the code with spaces instead of tabs, but if some
parts end up not properly indented, I hope no one will hold it against
me.

Thanks in advance

Nov 1 '06
we******@gmail.com writes:
[...]
So here's what comes off the top of my head: Ask yourself the
following problem. How many factors of 2 are there in 1000000! ?
Certainly every other number is even. But every 4th number has 2
factors of 2, and every 8th number has 3 factors of two in it. So the
answer is:

f(2) = floor(1000000/2) + floor(1000000/4) + floor(1000000/8) + ...

Similarly we can figure out the number of factors of 3s, 5s, 7s, 11s,
and all the primes less than 1000000, as f(3), f(5), etc. Then the
result you are looking for is:

1000000! = pow(2,f(2))*pow(3,f(3))*pow(5,f(5))*...

Now, the question is -- what makes us think this will be any faster?
Well, the pow() function can be computed with successive squaring
tricks. Squaring is faster than straight multiplying because (a*q^r + b)^2
= a^2*q^(2*r) + b^2 + 2*a*b*(q^r). And the resulting big number
multiplies that you have to perform here can be accelerated using any
number of big number multiply acceleration tricks (Karatsuba,
Toom-Cook, or DFTs).
Hmm, interesting approach. But does it really speed things up? If
you want to compute, say, 3**1024 with extended-precision integers,
successive squaring reduces the number of multiplications, but is it
faster than doing the 1023 multiplications? I'm thinking that
multiplying, say, a 499-digit number by a 1-digit number might be
quicker than multiplying two 250-digit numbers.

I haven't really thought this through (feel free to say that that's
obvious).
I don't know how much faster, if any, doing things this way would be.
If you find a faster way in the literature, I would be interested in
knowing it.
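
For reference, here is a minimal sketch of the successive-squaring idea on plain machine integers, reduced modulo a prime only so the intermediate values fit in 64 bits; the function name pow_mod and the modulus are made up for illustration, not taken from anyone's code. It uses roughly 2*log2(e) multiplications instead of e-1, which is the count being discussed above.

#include <stdio.h>
#include <stdint.h>

/* Exponentiation by successive squaring: walk the bits of the exponent,
   squaring the base at each step and multiplying it in when the bit is
   set.  Reduction mod m keeps everything in 64 bits (assumes m < 2^32
   so the products cannot overflow). */
uint64_t pow_mod(uint64_t b, uint64_t e, uint64_t m)
{
    uint64_t result = 1 % m;
    b %= m;
    while (e > 0) {
        if (e & 1)
            result = (result * b) % m;   /* bit set: multiply in the base */
        b = (b * b) % m;                 /* square for the next bit */
        e >>= 1;
    }
    return result;
}

int main(void)
{
    /* 3^1024 mod 1000000007: about a dozen multiplications in total,
       versus 1023 done naively */
    printf("%llu\n", (unsigned long long)pow_mod(3, 1024, 1000000007));
    return 0;
}

Whether the same trick wins for multi-thousand-digit operands is exactly the open question here, since squaring a big number is not much cheaper than a general big multiply unless a sub-quadratic method is used.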
--
Keith Thompson (The_Other_Keith) ks***@mib.org <http://www.ghoti.net/~kst>
San Diego Supercomputer Center <*> <http://users.sdsc.edu/~kst>
We must do something. This is something. Therefore, we must do this.
Nov 2 '06 #11
Chris Dollin said:
Frederick Gotham wrote:
>Probability suggests that he's lying, because only a very small
proportion of proficient programmers waste their time on mickey-mouse
hold-my-hand languages such as Java. Also, Java is slow.

Can we drop the insults, please?
That would at least be a start, but he still needs to make amends for past
insults.

--
Richard Heathfield
"Usenet is a strange place" - dmr 29/7/1999
http://www.cpax.org.uk
email: rjh at above domain (but drop the www, obviously)
Nov 2 '06 #12
Keith Thompson said:

<snip>
If
you want to compute, say, 3**1024 with extended-precision integers,
successive squaring reduces the number of multiplications, but is it
faster than doing the 1023 multiplications? I'm thinking that
multiplying, say, a 499-digit number by a 1-digit number might be
quicker than multiplying two 250-digit numbers.
Sure, but obtaining the 499-digit number in the first place is likely to be
slower.

Measure, measure, measure!
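
In that spirit, a minimal timing sketch using the standard clock() from <time.h>; do_work() is a made-up stand-in for whichever variant is being compared, not anything from the posted code.

#include <stdio.h>
#include <time.h>

/* Hypothetical placeholder for the code being measured. */
void do_work(void)
{
    volatile long sink = 0;
    long i;
    for (i = 0; i < 10000000L; i++)
        sink += i * 7;
}

int main(void)
{
    clock_t start, end;

    start = clock();
    do_work();
    end = clock();
    printf("elapsed CPU time: %.3f s\n",
           (double)(end - start) / CLOCKS_PER_SEC);
    return 0;
}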

--
Richard Heathfield
"Usenet is a strange place" - dmr 29/7/1999
http://www.cpax.org.uk
email: rjh at above domain (but drop the www, obviously)
Nov 2 '06 #13
Keith Thompson wrote:
we******@gmail.com writes:
[...]
So here's what comes off the top of my head: Ask yourself the
following problem. How many factors of 2 are there in 1000000! ?
Certainly every other number is even. But every 4th number has 2
factors of 2, and every 8th number has 3 factors of two in it. So the
answer is:

f(2) = floor(1000000/2) + floor(1000000/4) + floor(1000000/8) + ...

Similarly we can figure out the number of factors of 3s, 5s, 7s, 11s,
and all the primes less than 1000000, as f(3), f(5), etc. Then the
result you are looking for is:

1000000! = pow(2,f(2))*pow(3,f(3))*pow(5,f(5))*...

Now, the question is -- what makes us think this will be any faster?
Well, the pow() function can be computed with successive squaring
tricks. Squaring is faster than straight multiplying because (a*q^r + b)^2
= a^2*q^(2*r) + b^2 + 2*a*b*(q^r). And the resulting big number
multiplies that you have to perform here can be accelerated using any
number of big number multiply acceleration tricks (Karatsuba,
Toom-Cook, or DFTs).

Hmm, interesting approach. But does it really speed things up? If
you want to compute, say, 3**1024 with extended-precision integers,
successive squaring reduces the number of multiplications, but is it
faster than doing the 1023 multiplications? I'm thinking that
multiplying, say, a 499-digit number by a 1-digit number might be
quicker than multiplying two 250-digit numbers.

I haven't really thought this through (feel free to say that that's
obvious).
I'll resist the temptation. Especially since I only went half way. :)

Anyhow, the issue is not whether or not the squaring trick is saving
you overall performance (it does -- this is not controversial.) The
real controversy with my approach is that it's breaking numbers down
into their constituent factors first, then recombining them. I.e., it's
faster to multiply 72 by 91 directly than to first break it down into 8
* 9 * 7 * 13. The hope, however, is that the number of primes is
sufficiently smaller than n (in this case there are about 78K primes under
1 million) to compensate for the cost of the power
computations. Notice that the vast majority of the primes will have a
power less than, say, 6, and that in fact the most common power is 1.
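
For what it's worth, the exponent f(p) described above is Legendre's formula, and it really does cost only a handful of machine divides per prime. A minimal sketch (the function name is just for illustration):

#include <stdio.h>

/* Exponent of the prime p in n!, i.e. Legendre's formula:
   f(p) = floor(n/p) + floor(n/p^2) + floor(n/p^3) + ...
   Each loop iteration is one integer divide, so only about
   log(n)/log(p) divides are needed per prime. */
unsigned long prime_exponent(unsigned long n, unsigned long p)
{
    unsigned long f = 0;
    while (n > 0) {
        n /= p;
        f += n;
    }
    return f;
}

int main(void)
{
    /* the number of factors of 2 in 1000000! -- prints 999993 */
    printf("f(2) = %lu\n", prime_exponent(1000000UL, 2UL));
    return 0;
}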

--
Paul Hsieh
http://www.pobox.com/~qed/
http://bstring.sf.net/

Nov 2 '06 #14
Frederick Gotham wrote:
fermineutron:
Now, on the Num Analyses forum/group there is a
post reporting that that person wrote Java code that computed 1000000!
in about a second. That is about 10000 times faster than I would expect
my code to do it. So the two possibilities are:
1) I am doing something terribly wrong
2) The other person is lying

The burden of proof is on the Java dude.

Probability suggests that he's lying, because only a very small proportion of
proficient programmers waste their time on mickey-mouse hold-my-hand
languages such as Java. Also, Java is slow.

Either that or the algorithm's something stupid like:

char const *Func(void)
{
return "23495072395732 584712579750932 750923750932759 2387509";
}
1 million factorial is a very very very very very very big number. I
assure you that they did not solve the problem in the way you suggest
unless they are rigging against a very specific benchmark. Can you
imagine a library that's 100s of megs larger than necessary just to hold
a table of factorials?
Ask the Java dude if you can see his code. If he refuses, assume that he's a
liar, then egg his house.
Right, because nobody ever writes proprietary code.

In the case of bignum libraries, while I don't know this first hand,
the specification and architecture of the Java language itself suggest
very strongly to me that its bignum performance should be equal to
the best bignum libraries in existence, and faster than any portable
pure C bignum library. Again, I don't know this from direct first-hand
knowledge, but there is a very strong possibility that you are completely
wrong on this point. Java may be very slow at many things, but if it's
fast at anything, bignum is probably one of the prime candidates.

--
Paul Hsieh
http://www.pobox.com/~qed/
http://bstring.sf.net/

Nov 2 '06 #15
* fermineutron <fr**********@yahoo.com> wrote:
For a while now I have been "playing" with a little C program to
compute the factorial of large numbers. Currently it takes about 1
second per 1000 multiplications; that is, 25000P1000 will take about a
second. It will take longer for 50000P1000, as expected, since more digits
will be in the answer. Now, on the Num Analyses forum/group there is a
post reporting that that person wrote Java code that computed 1000000!
in about a second. That is about 10000 times faster than I would expect
my code to do it. So the two possibilities are:
1) I am doing something terribly wrong
2) The other person is lying

At the moment I am inclined to believe that it's number 1.
Yes. I'm pretty sure that "the person" used the Prime-Schönhage algorithm and
the Java port of the apfloat library.

Using the C++ version of apfloat I get these times for 1000000! :

builtin factorial(): 45s

Prime-Schönhage: 15s
All on a Celeron 1.2 GHz. Apfloat's number theoretic transforms probably perform
much faster on a 64 bit platform.
Stefan Krah
Nov 2 '06 #16
we******@gmail.com wrote:
fermineutron wrote:
>Ian Collins wrote:
>>Before anyone reviews the code, have you profiled it?
If not, why not? If you have, where were the bottlenecks?

It's a factorial calculation. The bottleneck is in the big integer
multiply, which itself should have a bottleneck in the platform's
multiply.
>I profiled it, but there were no obvious bottlenecks which I
would not anticipate to be there by design.

here is the profiler output

http://igorpetrusky.awardspace.com/Temp/RunStats.html

I was thinking that maybe there is some other algorithm that is
better than mine for the long int arithmetic?

Probably so. Consider that you are doing nothing more than
computing the straight product of the numbers using no arithmetic
short cuts at all.

So here's what comes off the top of my head: Ask yourself the
following problem. How many factors of 2 are there in 1000000! ?
Certainly every other number is even. But every 4th number has 2
factors of 2, and every 8th number has 3 factors of two in it.
So the answer is:

f(2) = floor(1000000/2) + floor(1000000/4) + floor(1000000/8) + ...

Similarly we can figure out the number of factors of 3s, 5s, 7s,
11s, and all the primes less than 1000000, as f(3), f(5), etc.
Then the result you are looking for is:

1000000! = pow(2,f(2))*pow(3,f(3))*pow(5,f(5))*...

Now, the question is -- what makes us think this will be any
faster? Well, the pow() function can be computed with successive
squaring tricks. Squaring is faster than straight multiplying
because (a*q^r + b)^2 = a^2*q^(2*r) + b^2 + 2*a*b*(q^r). And
the resulting big number multiplies that you have to perform
here can be accelerated using any number of big number multiply
acceleration tricks (Karatsuba, Toom-Cook, or DFTs).

I don't know how much faster, if any, doing things this way would
be. If you find a faster way in the literature, I would be
interested in knowing it.
This doesn't answer the 'faster' question, but your method is
basically the same as my algorithm for computing large factorials
in limited size registers, which I published here over 3 years ago.

/* compute factorials, extended range
on a 32 bit machine this can reach fact(15) without
unusual output formats. With the prime table shown
overflow occurs at 101.

Public domain, by C.B. Falconer. 2003-06-22
*/

#include <stdio.h>
#include <stdlib.h>
#include <limits.h>

/* 2 and 5 are handled separately
Placing 2 at the end attempts to preserve such factors
for use with the 5 factor and exponential notation
*/
static unsigned char primes[] = {3,7,11,13,17,19,23,29,31,37,
41,43,47,53,57,59,61,67,71,
/* add further primes here -->*/
2,5,0};
static unsigned int primect[sizeof primes]; /* = {0} */

static double fltfact = 1.0;

/* ----------------- */

static
unsigned long int fact(unsigned int n, unsigned int *zeroes)
{
unsigned long val;
unsigned int i, j, k;

#define OFLOW ((ULONG_MAX / j) < val)

/* This is a crude mechanism for passing back values */
for (i = 0; i < sizeof primes; i++) primect[i] = 0;

for (i = 1, val = 1UL, *zeroes = 0; i <= n; i++) {
fltfact *= i; /* approximation */
j = i;
/* extract exponent of 10 */
while ((0 == (j % 5)) && (!(val & 1))) {
j /= 5; val /= 2;
(*zeroes)++;
}
/* Now try to avoid any overflows */
k = 0;
while (primes[k] && OFLOW) {
/* remove factors primes[k] */
while (0 == (val % primes[k]) && OFLOW) {
val /= primes[k];
++primect[k];
}
while (0 == (j % primes[k]) && OFLOW) {
j /= primes[k];
++primect[k];
}
k++;
}

/* Did we succeed in the avoidance */
if (OFLOW) {
#if DEBUG
fprintf(stderr, "Overflow at %u, %lue%u * %u\n",
i, val, *zeroes, j);
#endif
val = 0;
break;
}
val *= j;
}
return val;
} /* fact */

/* ----------------- */

int main(int argc, char *argv[])
{
unsigned int x, zeroes;
unsigned long f;

if ((2 == argc) && (1 == sscanf(argv[1], "%u", &x))) {
if (!(f = fact(x, &zeroes))) {
fputs("Overflow \n", stderr);
return EXIT_FAILURE;
}

printf("Factori al(%u) == %lu", x, f);
if (zeroes) printf("e%u", zeroes);
for (x = 0; primes[x]; x++) {
if (primect[x]) {
printf(" * pow(%d,%d)", primes[x], primect[x]);
}
}
putchar('\n');
printf("or approximately %.0f.\n", fltfact);
return 0;
}
fputs("Usage: fact n\n", stderr);
return EXIT_FAILURE;
} /* main */

--
Chuck F (cbfalconer at maineline dot net)
Available for consulting/temporary embedded and systems.
<http://cbfalconer.home .att.net>
Nov 2 '06 #17
CBFalconer wrote:
we******@gmail.com wrote:
fermineutron wrote:
Ian Collins wrote:

Before anyone reviews the code, have you profiled it?
If not, why not? If you have, where were the bottlenecks?
It's a factorial calculation. The bottleneck is in the big integer
multiply, which itself should have a bottleneck in the platform's
multiply.
I profiled it, but there were no obvious bottlenecks which I
would not anticipate to be there by design.

here is the profiler output

http://igorpetrusky.awardspace.com/Temp/RunStats.html

I was thinking that maybe there is some other algorithm that is
better than mine for the long int arithmetic?
Probably so. Consider that you are doing nothing more than
computing the straight product of the numbers using no arithmetic
short cuts at all.

So here's what comes off the top of my head: Ask yourself the
following problem. How many factors of 2 are there in 1000000! ?
Certainly every other number is even. But every 4th number has 2
factors of 2, and every 8th number has 3 factors of two in it.
So the answer is:

f(2) = floor(1000000/2) + floor(1000000/4) + floor(1000000/8) + ...

Similarly we can figure out the number of factors of 3s, 5s, 7s,
11s, and all the primes less than 1000000, as f(3), f(5), etc.
Then the result you are looking for is:

1000000! = pow(2,f(2))*pow(3,f(3))*pow(5,f(5))*...

Now, the question is -- what makes us think this will be any
faster? Well, the pow() function can be computed with successive
squaring tricks. Squaring is faster than straight multiplying
because (a*q^r + b)^2 = a^2*q^(2*r) + b^2 + 2*a*b*(q^r). And
the resulting big number multiplies that you have to perform
here can be accelerated using any number of big number multiply
acceleration tricks (Karatsuba, Toom-Cook, or DFTs).

I don't know how much faster, if any, doing things this way would
be. If you find a faster way in the literature, I would be
interested in knowing it.

This doesn't answer the 'faster' question, but your method is
basically the same as my algorithm for computing large factorials
in limited size registers, which I published here over 3 years ago.
Your solution computes a mathematically equivalent re-expression, but
the similarities end there. The differences start where the
*algorithms* begin. Your algorithm counts each factor individually
(++primect[k]), paying a divide and modulo penalty for each occurring
factor of each number, one by one. As you can see in my formulation
above, calculating each f(p) will perform log(p,n) divides and no
modulos. So yeah, your solution definitely doesn't answer the 'faster'
question.
/* compute factorials, extended range
on a 32 bit machine this can reach fact(15) without
unusual output formats. With the prime table shown
overflow occurs at 101.
The OP asked for 1000000!. You're off by 4 orders of magnitude. For
factorials that small, why wouldn't you just create a table?
Public domain, by C.B. Falconer. 2003-06-22
*/

#include <stdio.h>
#include <stdlib.h>
#include <limits.h>

/* 2 and 5 are handled separately
Placing 2 at the end attempts to preserve such factors
for use with the 5 factor and exponential notation
*/
static unsigned char primes[] = {3,7,11,13,17,19,23,29,31,37,
41,43,47,53,57,59,61,67,71,
/* add further primes here -->*/
2,5,0};
Ok, seriously dude. Do I need to say "640K ought to be enough for
anyone" yet again to you? What's your problem with prime number
sequences anyway? This is the second time I've seen you express them
as a minuscule finite table, for problems that require an unbounded-above
sequence of them.
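
For completeness, generating every prime up to n instead of hard-coding a table is only a few lines with a sieve of Eratosthenes. This is a rough, untuned sketch (the function name is made up); for n = 1000000 it counts 78498 primes.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Sieve of Eratosthenes: returns a malloc'd array of flags where
   flags[i] != 0 means i is prime, for 0 <= i <= n.  Caller frees. */
unsigned char *sieve(unsigned long n)
{
    unsigned long i, j;
    unsigned char *flags = malloc(n + 1);

    if (flags == NULL)
        return NULL;
    memset(flags, 1, n + 1);
    flags[0] = 0;
    if (n >= 1)
        flags[1] = 0;
    for (i = 2; i * i <= n; i++)
        if (flags[i])
            for (j = i * i; j <= n; j += i)
                flags[j] = 0;
    return flags;
}

int main(void)
{
    unsigned long n = 1000000UL, i, count = 0;
    unsigned char *flags = sieve(n);

    if (flags == NULL)
        return EXIT_FAILURE;
    for (i = 2; i <= n; i++)
        count += flags[i];
    printf("%lu primes up to %lu\n", count, n);   /* 78498 for n = 1000000 */
    free(flags);
    return 0;
}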

--
Paul Hsieh
http://www.pobox.com/~qed/
http://bstring.sf.net/

Nov 2 '06 #18

fermineutron wrote:
For a while now I have been "playing" with a little C program to
compute the factorial of large numbers.
You asked this question before, and I gave you the answer.

You're doing one digit at a time. Doing it with 32 bits at a time will
be at least 400,000,000 times faster. Why? Because with 32 bits you
can store numbers up to about 4 billion, AND you don't need to do a
divide and a mod each time.
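
A rough sketch of that idea, assuming a compiler with the C99 <stdint.h> types (the function name is invented for illustration): keep the number in base-2^32 limbs and let a 64-bit intermediate hold the product, so the per-limb "divide and mod" becomes a shift and a truncation. The divides only come back once, at the end, when converting the result to decimal for printing.

#include <stddef.h>
#include <stdint.h>

/* Multiply a big number stored as base-2^32 limbs (least significant
   limb first) by a small factor m, in place.  Returns the new limb
   count; the caller must ensure the array has room for one extra limb. */
size_t limbs_mul_small(uint32_t *limb, size_t n, uint32_t m)
{
    uint64_t carry = 0;
    size_t i;

    for (i = 0; i < n; i++) {
        uint64_t t = (uint64_t)limb[i] * m + carry;
        limb[i] = (uint32_t)t;   /* low 32 bits stay in this limb */
        carry   = t >> 32;       /* high 32 bits become the carry  */
    }
    if (carry != 0)
        limb[n++] = (uint32_t)carry;
    return n;
}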

Nov 2 '06 #19

Ancient_Hacker wrote:
You asked this question before, and I gave you the answer.

You're doing one digit at a time. Doing it with 32 bits at a time will
be at least 400,000,000 times faster. Why? Because with 32 bits you
can store numbers up to about 4 billion, AND you don't need to do a
divide and a mod each time.
I am doing 3 digits at a time; see the base constant at the top of the
file. Theoretically I could use 5 digits, or, if I made the code
C99-only and hence used a 64-bit long long int, I could increase the
base even higher. The constraint on the base is that base^2 must fit in
the largest int type available.
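
For example, C99's unsigned long long is at least 64 bits, so a base of 10^9 satisfies that constraint: (10^9 - 1) squared, plus a carry, still fits well below 2^64. Here is a sketch of one multiply-by-a-small-factor step under that assumption; it is the decimal analogue of the base-2^32 sketch above, keeping printing easy, and the names are illustrative only.

#define BASE 1000000000UL   /* 10^9: BASE*BASE fits easily in 64 bits */

/* Multiply the big number in data[0..digits-1] (least significant limb
   first, each limb < BASE) by a small factor m < BASE, using an
   unsigned long long intermediate so base^2-sized products cannot
   overflow.  Returns the new limb count; the caller must leave room
   for the extra limbs. */
long mul_small(unsigned long *data, long digits, unsigned long m)
{
    unsigned long long carry = 0;
    long i;

    for (i = 0; i < digits; i++) {
        unsigned long long t = (unsigned long long)data[i] * m + carry;
        data[i] = (unsigned long)(t % BASE);
        carry = t / BASE;
    }
    while (carry != 0) {
        data[digits++] = (unsigned long)(carry % BASE);
        carry /= BASE;
    }
    return digits;
}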

Nov 2 '06 #20
