
Language efficiency of C versus FORTRAN et al

Hi,

I have read somewhere that C code sometimes cannot be compiled to be as
efficient as FORTRAN, e.g. for matrix multiplication, because a C
compiler cannot make the assumptions about arrays that a FORTRAN
compiler can. But I don't understand the example, not least because I
don't understand FORTRAN. I also don't understand why it is more
efficient in this case for a compiler to choose the order of evaluation
(or whatever it is that it does for matrix multiplication to make it
faster).

Can anyone explain all this, please? And how much speed-up might one
get from using FORTRAN over C for such things? What sort of compilers
offer the best performance for issues like this? Is there any general
advice about how to achieve efficient code for such linear algebra?

This is a fairly live issue, because matrix multiplication (and other
things, like evaluating a dot product) often take an extremely long
time for large matrices and vectors.

I am also wondering how other languages like Pascal might compare to C
and Fortran in this regard; does Pascal have enough array structure to
allow compilers to take advantage of such optimisations?

Any more general docs on issues like this would also be interesting
reading.

Thanks

Nov 14 '05 #1

<tr*************@hotmail.com> wrote in message
news:11**********************@f14g2000cwb.googlegroups.com...
Hi,

I have read somewhere that C code sometimes cannot be compiled to be as
efficient as FORTRAN, e.g. for matrix multiplication, because a C
compiler cannot make the assumptions about arrays that a FORTRAN
compiler can. But I don't understand the example, not least because I
don't understand FORTRAN. I also don't understand why it is more
efficient in this case for a compiler to choose the order of evaluation
(or whatever it is that it does for matrix multiplication to make it
faster).

Can anyone explain all this, please? And how much speed-up might one
get from using FORTRAN over C for such things? What sort of compilers
offer the best performance for issues like this? Is there any general
advice about how to achieve efficient code for such linear algebra?

This is a fairly live issue, because matrix multiplication (and other
things, like evaluating a dot product) often take an extremely long
time for large matrices and vectors.

Although Fortran (since 15 years ago) provides matmul() and dot_product()
intrinsics, this is only a matter of convenience. As these operations, in
themselves, don't offer any danger of over-writing their operands, the
larger differences in potential efficiency lie elsewhere, provided you don't
choose an algorithm which is more awkward in one language than the other.
Compilers tend to ignore differences in restrictions on order of evaluation;
"vectorizin g" compilers are likely to implement them the same way in C and
Fortran. Even on a single CPU, it is generally necessary to break them down
into 8 or so batch sums which can be calculated in parallel. In principle,
the Fortran intrinsics could incorporate MP parallelism, but in practice,
that is likely to depend on application of OpenMP, which is not part of
either language, but works the same way with either.
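
To make the "batch sums" idea concrete, here is a minimal C sketch (not
from the original post; the function name and the choice of four partial
sums rather than eight are purely illustrative) of a dot product split
into independent accumulators, which is essentially the transformation a
vectorizing compiler performs:

#include <stddef.h>

/* Dot product accumulated in four independent partial sums, so the
 * floating-point additions in one accumulator need not wait on the
 * others.  Note this reorders the additions, which a compiler may
 * only do when told the reordering is acceptable. */
double dot4(size_t n, const double *x, const double *y)
{
    double s0 = 0.0, s1 = 0.0, s2 = 0.0, s3 = 0.0;
    size_t i;

    for (i = 0; i + 4 <= n; i += 4) {
        s0 += x[i]     * y[i];
        s1 += x[i + 1] * y[i + 1];
        s2 += x[i + 2] * y[i + 2];
        s3 += x[i + 3] * y[i + 3];
    }
    for (; i < n; i++)          /* leftover elements */
        s0 += x[i] * y[i];
    return (s0 + s1) + (s2 + s3);
}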
For large matrix multiplication (say, with minimum dimension 24 or more) or
dot products, you may want to use a library optimized for your processor,
usually following the BLAS schemes. If you are trying to avoid wrapper
overhead when calling the basic BLAS, note that it is written with an
old-style Fortran interface, which requires some study to emulate in C. Other than that,
differences in efficiency among calling languages or compilers disappear,
for those operations supported in the library.
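
As a rough sketch of what "emulating the old-style Fortran interface"
involves, here is one way to call the Fortran-77 BLAS DDOT from C. It
assumes the common Unix conventions -- every argument passed by
reference, and the external name lower-cased with a trailing
underscore -- both of which are compiler-dependent:

/* Declaration matching the Fortran-77 BLAS DDOT; the ddot_ name is
 * the usual Unix mangling convention, not guaranteed by any standard. */
extern double ddot_(const int *n, const double *x, const int *incx,
                    const double *y, const int *incy);

double dot(int n, const double *x, const double *y)
{
    int inc = 1;                /* unit stride through both vectors */
    /* Every argument, even the scalars, goes by reference. */
    return ddot_(&n, x, &inc, y, &inc);
}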
If you do have operations which one compiler succeeds in vectorizing
effectively, while another fails, the speedup could range up to a factor of
6 or so on common processors. It may be more difficult to achieve that in
C, requiring full use of restrict keywords and the like, and maybe more hand
optimization of the code. If you are "restricted" to a C subset, or are
required to use options which support violations of the C standard, you may
have no hope of matching Fortran performance.
Nov 14 '05 #2
tr*************@hotmail.com wrote:
Hi,

I have read somewhere that C code sometimes cannot be compiled to be as
efficient as FORTRAN, e.g. for matrix multiplication, because a C
compiler cannot make the assumptions about arrays that a FORTRAN
compiler can. But I don't understand the example, not least because I
don't understand FORTRAN. I also don't understand why it is more
efficient in this case for a compiler to choose the order of evaluation
(or whatever it is that it does for matrix multiplication to make it
faster).

Can anyone explain all this, please? And how much speed-up might one
get from using FORTRAN over C for such things? What sort of compilers
offer the best performance for issues like this? Is there any general
advice about how to achieve efficient code for such linear algebra?

This is a fairly live issue, because matrix multiplication (and other
things, like evaluating a dot product) often take an extremely long
time for large matrices and vectors.

I am also wondering how other languages like Pascal might compare to C
and Fortran in this regard; does Pascal have enough array structure to
allow compilers to take advantage of such optimisations?

Any more general docs on issues like this would also be interesting
reading.

Thanks


Fortran compilers have been optimised for SIMD architectures, e.g. Cray
supercomputers. They improve the speed of vector/matrix calculations.

I presume the same is now true for C matrix libraries on SIMD machines.

See http://en.wikipedia.org/wiki/SIMD

gtoomey
Nov 14 '05 #3
In article <11**********************@f14g2000cwb.googlegroups.com>
<tr*************@hotmail.com> wrote:
I have read somewhere that C code sometimes cannot be compiled to be as
efficient as FORTRAN, e.g. for matrix multiplication, because a C
compiler cannot make the assumptions about arrays that a FORTRAN
compiler can.
In theory, there is no reason the C code cannot be just as fast.
As a practical matter, however, this was true before C99. In
C99, one can now tell a compiler that a given pointer is the only
"handle" by which some array is accessed, through the "restrict"
type-qualifier.
But I don't understand the example, not least because I
don't understand FORTRAN. I also don't understand why it is more
efficient in this case for a compiler to choose the order of evaluation
(or whatever it is that it does for matrix multiplication to make it
faster).

Can anyone explain all this, please?
Assume we have some routine/function f() that takes one or more
pointers and performs some operation using them. For instance,
consider the boring old Fortran "DAXPY" function, double-precision
a*X + Y where X and Y are vectors with N elements. (I am leaving
out the xinc and yinc parameters on purpose; this is not quite the
usual DAXPY.) The result of the multiply-and-add is stuffed back
into the vector Y. This gives the following simple C code:

void daxpy(size_t n, double a, double *x, double *y) {
size_t i;

for (i = 0; i < n; i++)
y[i] += a * x[i];
}

(aside from the increments and occasional optimizations for a==0.0
and such, this really *is* all there is to DAXPY).

In C, given the above function, the following is perfectly
legal as a call:

double arr[100];
... /* fill in some initial values for arr */ ...
daxpy(99, 3.0, &arr[0], &arr[1]);

Inside daxpy(), we compute "y[i] += a * x[i]"; this is now the
same as doing:

for (i = 0; i < 99; i++)
arr[i + 1] += a * arr[i];

This means that the first trip through the loop, we use arr[0] to
adjust arr[1], then we use the new, adjusted arr[1] to adjust
arr[2], and so on. In other words, the C compiler *must not*
"preload" the input values of arr[1] through arr[98] and use some
or all of those, instead of the new values that will be written
into arr[1] through arr[98] when i is 0 through 97 respectively.
The C code *must* use the newly computed values each time.
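
A small test driver (hypothetical, not part of the original post) makes
the dependency visible. Starting from all 1.0, each y[i] is the freshly
updated x[i+1], so the result is 1 4 13 40 121; a compiler that wrongly
preloaded the x values would instead produce 1 4 4 4 4:

#include <stdio.h>

void daxpy(size_t n, double a, double *x, double *y) {
        size_t i;

        for (i = 0; i < n; i++)
                y[i] += a * x[i];
}

int main(void) {
        double arr[5] = { 1.0, 1.0, 1.0, 1.0, 1.0 };
        size_t i;

        /* x and y overlap: y[i] is the same object as x[i+1], so each
         * trip through the loop feeds the freshly written value into
         * the next one. */
        daxpy(4, 3.0, &arr[0], &arr[1]);

        for (i = 0; i < 5; i++)
                printf("%g ", arr[i]);  /* prints: 1 4 13 40 121 */
        printf("\n");
        return 0;
}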

If this were Fortran code instead, the call would be spelled slightly
differently, using all-capitals (in F77 at least), adding the CALL
keyword, omitting the "&"s, and changing [] to () (assuming ARR was
declared with a lower bound of 0, i.e., REAL*8 ARR(0:99)):

CALL DAXPY(99, 3.0D0, ARR(0), ARR(1))

The kicker is that this call is *illegal* in Fortran. More precisely,
it invokes undefined behavior, just like "i = i++" in C. A Fortran
compiler does not have to detect the problem; no diagnostic is
required; but the code is allowed to misbehave in arbitrary ways.

The call is legal and well-defined in C, but not in Fortran. Well,
so what?

The answer is: "so, the Fortran compiler *can* preload the input
values". On some machines, this helps a little. On some machines,
this helps a lot. On some machines, it does not help at all.
And how much speed-up might one get from using FORTRAN over C for
such things?
A little (say, 5%), a lot (e.g., subroutine runs about 40 times
faster), or perhaps none at all.

C99 compilers (what few of them there are) have a new keyword,
"restrict". If we rewrite daxpy() as:

void daxpy(size_t n, double a, double *restrict x, double *restrict y) {
size_t i;

for (i = 0; i < n; i++)
y[i] += a * x[i];
}

then that C99 compiler is now allowed (but not required) to make
the *same* assumptions as the Fortran compiler. In other words,
this function is *less* useful to you (the programmer) than the
original, unrestricted version -- but by making it less useful
(i.e., more constrained), we tell the compiler more about expressions
like x[i] and y[i]. In particular, we have told the compiler --
whether it is true or not -- that x[i] and y[j] *never* name the
same underlying object, no matter what valid values of i and j are
used. The compiler may, if it chooses, "preload" some or all of
x[i] and/or y[i], if that speeds up the machine code.
What sort of compilers offer the best performance for issues like
this?
comp.lang.c is all about portable, Standard-conforming code;
performance is irrelevant here. Fortunately, it turns out that
the answer depends greatly on your particular hardware, and
hence a hardware-specific group is the right place to ask.
(*Which* hardware-specific group depends on the hardware, of
course.)
I am also wondering how other languages like Pascal might compare to C
and Fortran in this regard; does Pascal have enough array structure to
allow compilers to take advantage of such optimisations?


Pascal -- the original J&W version, anyway -- has such strong
constraints on arrays that the code can be very fast, but you cannot
write useful programs in it. :-) You will find, as a general rule,
that the more generally-useful the language, the more difficult it
is for a compiler to produce fast machine code for it (at least,
without "hints" that the program does not use most of those wonderful
features). On the other hand, if you can work directly "in the
problem" as it were -- for instance, writing mathematical equations
directly, rather than expanding them out into loops or subroutine
calls like DAXPY -- then you (the human) may be able to simplify
the problem, so that nowhere near as much machine code is required.
As a rule, the fastest and most reliable parts of any program are
those that are not there.

(For this last reason, I always thought it was kind of nutty to
write these programs in Fortran instead of just using APL... :-) )
--
In-Real-Life: Chris Torek, Wind River Systems
Salt Lake City, UT, USA (40°39.22'N, 111°50.29'W) +1 801 277 2603
email: forget about it http://web.torek.net/torek/index.html
Reading email is like searching for food in the garbage, thanks to spammers.
Nov 14 '05 #4
To optimize the performance of linear algebra operations, one should
consider calling hardware-specific libraries, such as the Intel Math
Kernel Library http://www.intel.com/software/products/mkl/features.htm
or the AMD Core Math Library (ACML). Typically, such libraries are
callable from Fortran, C, and C++.
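
For instance, such libraries also ship the C-oriented CBLAS interface.
A minimal sketch of a square matrix multiply through cblas_dgemm follows
(assuming the cblas.h header these products provide; the exact header
name and link flags vary by vendor and platform):

#include <cblas.h>

/* C = 1.0*A*B + 0.0*C for square n-by-n matrices stored in
 * row-major order, using the CBLAS interface to DGEMM. */
void matmul(int n, const double *A, const double *B, double *C)
{
    cblas_dgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                n, n, n,        /* M, N, K       */
                1.0, A, n,      /* alpha, A, lda */
                B, n,           /* B, ldb        */
                0.0, C, n);     /* beta, C, ldc  */
}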

Nov 14 '05 #5
Chris Torek wrote:
On the other hand, if you can work directly "in the
problem" as it were -- for instance, writing mathematical equations
directly, rather than expanding them out into loops or subroutine
calls like DAXPY -- then you (the human) may be able to simplify
the problem, so that nowhere near as much machine code is required.
As a rule, the fastest and most reliable parts of any program are
those that are not there.

(For this last reason, I always thought it was kind of nutty to
write these programs in Fortran instead of just using APL... :-) )


I think it's "kind of nutty" that you write this in the year 2004, when
the last 3 Fortran standards (1990, 1995, and 2003) have provided a
full set of array operations, including array sections, described at
http://www.pcc.qub.ac.uk/tec/courses...dentMIF_5.html
There are many Fortran 95 compilers available, including a free one
called g95 at http://www.g95.org .

Nov 14 '05 #6
>Chris Torek wrote:
(For this last reason, I always thought it was kind of nutty to
write these programs in Fortran instead of just using APL... :-) )

In article <11**********************@f14g2000cwb.googlegroups.com>
<be*******@aol.com> wrote:
I think it's "kind of nutty" that you write this in the year 2004, when
the last 3 Fortran standards (1990, 1995, and 2003) have provided a
full set of array operations, including array sections, described at
http://www.pcc.qub.ac.uk/tec/courses...dentMIF_5.html
There are many Fortran 95 compilers available, including a free one
called g95 at http://www.g95.org .


Well, I *was* kidding. Perhaps I should have said "F77" though.
It is true that F90 and F95 have greatly improved the language's
capabilities (and greatly changed it -- if people thought C90 to
C99 was a shock, try F77 to F90...). The last time I actually
*used* Fortran was in the days of F77, in any case.

One might also note that APL never really "made it" as a general-use
language, unlike Fortran. This may have been because it required
a special type-ball on the Selectric, to print out the funny
character-set -- remember, this language was used in the days of
printing terminals, and even depended on them in the form of
overstrike (quote-quad, for instance, was formed by typing both
the quad and quote characters in the same position, by backspacing
between the two) -- or perhaps because APL seemed to be one of
those "write-only languages": no one could read programs written
in APL, not even the author. :-) (On the other hand, if "ugly
syntax" is such a problem, perhaps F90-and-successors are doomed
too. :-) Oddly enough, Fortran aficionados use this same argument
against C...)

(I did a google search -- keywords "apl iverson", to avoid hits on
other APL TLAs -- and was surprised to find that the language is
still in use. Of course, today, with bitmapped displays, it is
easy enough to construct APL fonts. Input methods are a bit
problematic: if C programmers think {|} are troublesome on
international keyboards, well, where are your rho and iota keys,
unless you have a Greek keyboard? But see, e.g.,
<http://home.earthlink.net/~swsirlin/apl.faq.html>. A couple
of followon languages, J and K, are perhaps more suitable today,
though.)
--
In-Real-Life: Chris Torek, Wind River Systems
Salt Lake City, UT, USA (40°39.22'N, 111°50.29'W) +1 801 277 2603
email: forget about it http://web.torek.net/torek/index.html
Reading email is like searching for food in the garbage, thanks to spammers.
Nov 14 '05 #7
In article <11**********************@f14g2000cwb.googlegroups.com>,
tr*************@hotmail.com wrote:
Hi,

I have read somewhere that C code sometimes cannot be compiled to be as
efficient as FORTRAN, e.g. for matrix multiplication, because a C
compiler cannot make the assumptions about arrays that a FORTRAN
compiler can. But I don't understand the example, not least because I
don't understand FORTRAN. I also don't understand why it is more
efficient in this case for a compiler to choose the order of evaluation
(or whatever it is that it does for matrix multiplication to make it
faster).

Can anyone explain all this, please?

For details of various processors, go to www.intel.com, www.amd.com,
www.apple.com etc. etc. You will find all the information about
processors that you want, with as much detail as you want.

A Google search will at least find a freely available copy of the
Fortran 77 Standard, and a freely available copy of the last draft for
the C99 Standard, which contains enough information to understand the
differences between C and Fortran in this respect.
Nov 14 '05 #8
tr*************@hotmail.com wrote:
I have read somewhere
Do you remember where?
that C code sometimes cannot be compiled to be as efficient as FORTRAN,
e.g. for matrix multiplication,
because a C compiler cannot make the assumptions about arrays
that a FORTRAN compiler can. But I don't understand the example,
not least because I don't understand FORTRAN.
I also don't understand why it is more efficient in this case
for a compiler to choose the order of evaluation
(or whatever it is that it does
for matrix multiplication to make it faster).

Can anyone explain all this, please?
And how much speed-up might one get from using FORTRAN over C for such things?
What sort of compilers offer the best performance for issues like this?
Is there any general advice
about how to achieve efficient code for such linear algebra?

This is a fairly live issue
No. It is a *dead* issue.
because matrix multiplication
(and other things, like evaluating a dot product)
often take an extremely long time for large matrices and vectors.

I am also wondering how other languages like Pascal
Please ask in the comp.lang.pascal newsgroup instead.
might compare to C and Fortran in this regard;
does Pascal have enough array structure to
allow compilers to take advantage of such optimisations?

Any more general docs on issues like this
would also be interesting reading.
> cat main.f
program main
   real C(1024, 1024)
   real B(1024, 1024)
   real A(1024, 1024)
   integer i, j, k
   do j = 1, 1024
      do i = 1, 1024
         A(i, j) = i - 1 + (j - 1)*1024
         B(i, j) = i - 1 + (j - 1)*1024
      end do
   end do
   ! C <-- A^TB (matrix-matrix dot product)
   do j = 1, 1024
      do i = 1, 1024
         C(i, j) = 0.0
         do k = 1, 1024
            C(i, j) = C(i, j) + A(k, i)*B(k, j)
         end do
      end do
   end do
end program main
> f90 -O3 -o main main.f
> time ./main
27.957u 0.105s 0:28.06 99.9% 0+0k 0+0io 0pf+0w
> cat main.c
#include <stdio.h>
#include <limits.h>

int main(int argc, char* argv[]) {
    const int n = 1024;
    float A[n][n];
    float B[n][n];
    float C[n][n];
    for (size_t i = 0; i < n; ++i) {
        for (size_t j = 0; j < n; ++j) {
            A[i][j] = j + i*n;
            B[i][j] = j + i*n;
        }
    }
    // C <-- BA^T (matrix-matrix dot product)
    for (size_t i = 0; i < n; ++i) {
        for (size_t j = 0; j < n; ++j) {
            C[i][j] = 0.0;
            for (size_t k = 0; k < n; ++k) {
                C[i][j] = C[i][j] + B[i][k]*A[j][k];
            }
        }
    }
    return 0;
}
> gcc -Wall -std=c99 -pedantic -O3 -o main main.c
> time ./main
31.365u 0.113s 0:31.62 99.5% 0+0k 0+0io 0pf+0w

Quality C and Fortran compilers will optimize these loops
in almost exactly the same way. The difference appears
when you pass arrays to "subprograms".

void matrix_matrix_dot(float C[][1024],
                       const float A[][1024], const float B[][1024]);

or
interface
   subroutine matrix_matrix_dot(C, A, B)
      real, intent(in), dimension(1024, 1024) :: A
      real, intent(in), dimension(1024, 1024) :: B
      real, intent(out), dimension(1024, 1024) :: C
   end subroutine matrix_matrix_dot
end interface

The problem is that the compiler does not know that
the destination array C is not an *alias*
for [part of] one of the source operands --
it can't be sure that programmers won't write

call matrix_matrix_dot(A, A, B)

for example. If C or Fortran programmers do this,
they are going to get garbage in A
instead of the matrix-matrix dot product.
Fortran programmers are simply admonished *not* to do this
but, for some reason, C programmers expect to get the same thing
as if they had written

for (size_t i = 0; i < n; ++i) {
    for (size_t j = 0; j < n; ++j) {
        A[i][j] = 0.0;
        for (size_t k = 0; k < n; ++k) {
            A[i][j] = A[i][j] + B[i][k]*A[j][k];
        }
    }
}

at the same place in their program where they wrote

matrix_matrix_dot(A, A, B);

which inhibits the C compiler from performing any optimizations
which might yield a different [wrong] result.

The new C99 standard allows programmers to qualify pointers
with the 'restrict' keyword, promising the compiler that the
data they point to is not aliased, so that a C99 compiler is
allowed to perform the same optimizations as the Fortran compiler.
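
A sketch of what that looks like for the prototype above (C99 permits
the qualifier inside the array brackets of a parameter declaration;
with this, a call like matrix_matrix_dot(A, A, B) becomes undefined
behavior, just as in Fortran):

/* Each restrict promises that the array is accessed only through
 * this parameter for the duration of the call. */
void matrix_matrix_dot(float C[restrict][1024],
                       const float A[restrict][1024],
                       const float B[restrict][1024]);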

NOTE: Neither Fortran nor C99 does anything
to enhance the safety of subprograms
that may modify input arrays through aliases.

Nov 14 '05 #9

In article <cq********@news2.newsguy.com>, Chris Torek <no****@torek.net> writes:

(I did a google search -- keywords "apl iverson", to avoid hits on
other APL TLAs -- and was surprised to find that the language is
still in use.
There's a fair bit of activity on comp.lang.apl. A number of free
APL implementations are available, though I haven't found one that's
completely satisfactory. What really seems to be lacking, though, is
a good online tutorial.

I posted an APL program to comp.programming just a few months back,
for one of those silly "write a program to do some trivial thing"
contests - I think it was to add the squares of the numbers from 1
to 10. My APL implementation was, of course, the shortest solution;
that's just the sort of thing APL has operators for.
Of course, today, with bitmapped displays, it is
easy enough to construct APL fonts. Input methods are a bit
problematic: if C programmers think {|} are troublesome on
international keyboards, well, where are your rho and iota keys,
unless you have a Greek keyboard?
Nah. rho is meta-r and iota is meta-i. The tough ones are the
characters that don't have an obvious mapping to standard keyboards
and that you don't use often enough to remember where they are - I
don't even know the names of some of the symbols on my keyboard
chart.
But see, e.g.,
<http://home.earthlink.net/~swsirlin/apl.faq.html>. A couple
of followon languages, J and K, are perhaps more suitable today,
though.)


There are some implementations of "APL in ASCII" too, which are APL2
workspaces that define ASCII names for all the APL special characters.
They're not so fun and pretty as using the APL glyphs, but they're
useful for posting source to Usenet and the like.

--
Michael Wojcik mi************@microfocus.com

Advertising Copy in a Second Language Dept.:
The precious ovum itself is proof of the oath sworn to those who set
eyes upon Mokona: Your wishes will be granted if you are able to invest
it with eternal radiance... -- Noriyuki Zinguzi
Nov 14 '05 #10
