
Checking return values for errors, a matter of style?

I've written a piece of code that uses sockets a lot (I know that
sockets aren't portable C, this is not a question about sockets per
se). Much of my code ended up looking like this:

    if (function(socket, args) == -1) {
        perror("function");
        exit(EXIT_FAILURE);
    }
I feel that the ifs destroy the readability of my code. Would it be
better to declare an int variable (say succ) and use the following
structure?

    int succ;

    succ = function(socket, args);
    if (succ == -1) {
        perror("function");
        exit(EXIT_FAILURE);
    }
What's considered "best practice" (feel free to substitute with: "what
do most good programmers use")?
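
A third variant, sketched here purely for illustration (the helper and its
name are assumptions, not something proposed in the original post), moves the
check into a small wrapper so each call site stays on a single line:

    #include <stdio.h>
    #include <stdlib.h>

    /* hypothetical helper: report the failed call and bail out */
    static void check(int rc, const char *what)
    {
        if (rc == -1) {
            perror(what);
            exit(EXIT_FAILURE);
        }
    }

    /* at a call site: check(function(socket, args), "function"); */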
Jul 31 '06
Skarmander said:
Richard Heathfield wrote:
<snip>
>>
The response to a failure depends on the situation. I've covered this in
some detail in my one-and-only contribution to "the literature",

I take it you're referring to "C Unleashed"; I haven't read it and am not
currently in a position to read it, so I hope you'll excuse me if I
clumsily raise points you discuss in depth in the book.
>so I'll just bullet-point some possible responses here:

* abort the program. The "student solution" - suitable only for high
school
students and, perhaps, example programs (with a big red warning flag).

This is not doing justice to the significant amount of work required to
make a program robust in the face of memory exhaustion. Quite bluntly:
it's not always worth it in terms of development time versus worst
possible outcome and likelihood of that outcome, even for programs used
outside high school.
Perhaps I should have made it clearer that I'm referring to programs that
are intended to be used over and over and over by lotsa lotsa people. If
it's not worth the programmer's time to write the code robustly, it's not
worth my time to use his program. Of course, it may be worth /his/ time to
use his own program.
I'm not saying the tradeoffs involved are always correctly assessed (it's
probably a given that the cost of failure is usually underestimated), but
I do believe they exist.
Yes, the cost of failure can be a lost customer, repeated N times.
I presume that instead of "aborting the program" we may read "exiting the
program immediately but as cleanly as possible", by the way, with the
latter being just that bit more desirable.
You may indeed. I was using "abort" in a more general sense, not the
technical C sense (std lib function).
>* break down the memory requirement into two or more sub-blocks.

Applicable only if the failure is a result of trying to allocate more
memory than you really need in one transaction, which is either a flaw or
an inappropriate optimization (or both, depending on your point of view).
No, you've misunderstood. I'm thinking about situations where you would
ideally like a contiguous block of N bytes, but would be able to manage
with B blocks of (N + B - 1) / B bytes (or, more generally, a bunch of
blocks that between them total N bytes). The allocator might find that
easier to manage.
>* use less memory!

Will solve the problem, in the sense that "don't do that then" will cure
any pain you may experience while moving your arm. The bit we're
interested in is when you've decided that you absolutely have to move your
arm.
Consider reading a complete line into memory. It's good practice to extend
the buffer by a multiple (typically 1.5 or 2) whenever you run out of RAM.
So - let's say you've got a buffer of 8192 bytes, and you've filled it but
you still haven't finished reading the line. So you try to realloc to
16384, but realloc says no. Well, nothing in the rules says you're
necessarily going to need all that extra RAM, so it might be worth trying
to realloc to 8192 + 4096 = 12288 or something like that instead.
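
A rough sketch of that fallback in code (the function, the variable names,
and the minimum useful step are illustrative assumptions, not taken from the
post): try to double the buffer, and if realloc refuses, retry with
successively smaller increments before giving up.

    #include <stdlib.h>

    /* grow the *cap-byte buffer buf; returns the new buffer, or NULL if
       even a modest increase is impossible (buf itself is still valid) */
    char *grow_buffer(char *buf, size_t *cap)
    {
        size_t want = *cap * 2;          /* preferred: double it         */
        size_t min_step = 64;            /* assumed smallest useful step */

        while (want > *cap) {
            char *p = realloc(buf, want);
            if (p != NULL) {
                *cap = want;
                return p;
            }
            if ((want - *cap) / 2 < min_step)
                break;
            want = *cap + (want - *cap) / 2;   /* halve the increment */
        }
        return NULL;   /* caller can now save data and exit gracefully */
    }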
>* point to a fixed-length buffer instead (and remember not to free it!)

Thread unsafe (unavoidably breaches modularity by aliasing a global, if
you like it more general), increased potential for buffer overflow,
requires checking for an exceptional construct in the corresponding free()
wrapper (I'm assuming we'd wrap this).
Who says it has to be global? It would never have occurred to me to use a
file scope object for such a purpose.

I don't see how this would ever be preferable to your next solution:
>* allocate an emergency reserve at the beginning of the program

How big of an emergency reserve, though? To make this work, your biggest
allocation should never exceed the emergency reserve (this could be
tedious to enforce, but is doable) and it will only allow you to complete
whatever you're doing right now.
Yes, but that might be enough to get the user's data to a point where it can
be saved, and reloaded later in a more memory-rich environment.
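
For what it's worth, a minimal sketch of the emergency-reserve idea (the
reserve size, names, and wrapper are assumptions for illustration): grab a
block at startup and release it only when an allocation fails, buying just
enough headroom to save the user's data and exit cleanly.

    #include <stdlib.h>

    #define RESERVE_SIZE (64u * 1024)   /* assumed; tune to the application */

    static void *reserve;

    void reserve_init(void)
    {
        reserve = malloc(RESERVE_SIZE);
    }

    void *reserve_malloc(size_t n)
    {
        void *p = malloc(n);
        if (p == NULL && reserve != NULL) {
            free(reserve);               /* give the reserve back... */
            reserve = NULL;
            p = malloc(n);               /* ...and try once more     */
        }
        return p;   /* NULL here means "save what you can and exit"  */
    }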

<snip>
>* use virtual memory on another machine networked to this one (this
is a lot of work, but it may be worth it on super-huge projects)
This is not really an answer to "how do I deal with memory allocation
failure in a program", but to "how do I make sure my program doesn't
encounter memory allocation failure".
Well, that would be nice, but I presume that most programmers would rather
have their RAM nice and local, where they can get at it quickly and easily.
It's a port in a storm, not a general purpose allocation strategy.

<snip>

--
Richard Heathfield
"Usenet is a strange place" - dmr 29/7/1999
http://www.cpax.org.uk
email: rjh at above domain (but drop the www, obviously)
Aug 1 '06 #31
"Johan Tibell" <jo**********@g mail.comwrites:
Morris Dovey wrote:
>IMO, "best practice" is to detect all detectable errors and recover
from all recoverable errors - and to provide a clear explanation
(however terse) of non-recoverable errors.

I have difficulty imagining that any "good programmer" would ignore
errors and/or fail to provide recovery from recoverable errors in
production software.

The topic of this post is not whether to check errors or not.
It is now.
Read the original message.
We've moved on.

--
Keith Thompson (The_Other_Keith) ks***@mib.org <http://www.ghoti.net/~kst>
San Diego Supercomputer Center <* <http://users.sdsc.edu/~kst>
We must do something. This is something. Therefore, we must do this.
Aug 1 '06 #32
Richard Heathfield <in*****@invalid.invalid> writes:
[...]
The response to a failure depends on the situation. I've covered this in
some detail in my one-and-only contribution to "the literature", so I'll
just bullet-point some possible responses here:

* abort the program. The "student solution" - suitable only for high school
students and, perhaps, example programs (with a big red warning flag).
* break down the memory requirement into two or more sub-blocks.
* use less memory!
* point to a fixed-length buffer instead (and remember not to free it!)
* allocate an emergency reserve at the beginning of the program
* use virtual memory on another machine networked to this one (this
is a lot of work, but it may be worth it on super-huge projects)
[...]

Out of curiosity, how often do real-world programs really do something
fancy in response to a malloc() failure?

The simplest solution, as you say, is to immediately abort the program
(which is far better than ignoring the error). The next simplest
solution is to do some cleanup (print a coherent error message, flush
buffers, close files, release resources, log the error, etc.) and
*then* abort the program.

I've seen suggestions that, if a malloc() call fails, the program can
fall back to an alternative algorithm that uses less memory. How
realistic is this? If there's an algorithm that uses less memory, why
not use it in the first place? (The obvious answer: because it's
slower.) Do programmers really go to the effort of implementing two
separate algorithms, one of which will be used only on a memory
failure (and will therefore not be tested as thoroughly as the primary
algorithm)?

--
Keith Thompson (The_Other_Keith) ks***@mib.org <http://www.ghoti.net/~kst>
San Diego Supercomputer Center <* <http://users.sdsc.edu/~kst>
We must do something. This is something. Therefore, we must do this.
Aug 1 '06 #33
Keith Thompson said:

<snip>
>
Out of curiosity, how often do real-world programs really do something
fancy in response to a malloc() failure?
Six times, for sufficiently variable values of six.
The simplest solution, as you say, is to immediately abort the program
(which is far better than ignoring the error). The next simplest
solution is to do some cleanup (print a coherent error message, flush
buffers, close files, release resources, log the error, etc.) and
*then* abort the program.
In other words, get out gracefully, preferably without losing any user data.
But that may involve completing the current task for which you wanted
memory in the first place.
I've seen suggestions that, if a malloc() call fails, the program can
fall back to an alternative algorithm that uses less memory. How
realistic is this?
I've had to do it myself in "real" code.
If there's an algorithm that uses less memory, why
not use it in the first place?
The answer is embarrassingly obvious.
(The obvious answer: because it's slower.)
Quite so.
Do programmers really go to the effort of implementing two
separate algorithms, one of which will be used only on a memory
failure
Yes, sometimes, if the situation warrants it. Often, it won't, but often !=
always. Normally, the goal is to fail gracefully without losing user data.
But some programs Must Not Fail - e.g. safety-critical stuff.
(and will therefore not be tested as thoroughly as the primary
algorithm)?
It needs to be tested thoroughly. The primary algorithm will be "tested"
more thoroughly in the sense that it's used more often and so there's a
greater likelihood that bugs will be exposed, but that's true of a lot of
code - for example, an OS's "copy a file" operation is tested more
thoroughly than its "format a disk drive" operation, simply by virtue of
the fact that users are more likely to /use/ the file copy, and every use
is a test. That doesn't mean the format code didn't get an appropriate
level of testing.

--
Richard Heathfield
"Usenet is a strange place" - dmr 29/7/1999
http://www.cpax.org.uk
email: rjh at above domain (but drop the www, obviously)
Aug 1 '06 #34
Richard Heathfield wrote:
Skarmander said:
>Richard Heathfield wrote:
<snip>
>>The response to a failure depends on the situation. I've covered this in
some detail in my one-and-only contribution to "the literature",
I take it you're referring to "C Unleashed"; I haven't read it and am not
currently in a position to read it, so I hope you'll excuse me if I
clumsily raise points you discuss in depth in the book.
>>so I'll just bullet-point some possible responses here:

* abort the program. The "student solution" - suitable only for high
school
students and, perhaps, example programs (with a big red warning flag).
This is not doing justice to the significant amount of work required to
make a program robust in the face of memory exhaustion. Quite bluntly:
it's not always worth it in terms of development time versus worst
possible outcome and likelihood of that outcome, even for programs used
outside high school.

Perhaps I should have made it clearer that I'm referring to programs that
are intended to be used over and over and over by lotsa lotsa people. If
it's not worth the programmer's time to write the code robustly, it's not
worth my time to use his program. Of course, it may be worth /his/ time to
use his own program.
You're using "robust" as if it's a yes or no property. You're free to storm
off in a huff shouting "well that's not robust then!" if someone presents
you with a program that can deal with just about anything except memory
exhaustion when it finally comes round the bend, but that doesn't mean
everyone would or should.
>I'm not saying the tradeoffs involved are always correctly assessed (it's
probably a given that the cost of failure is usually underestimated), but
I do believe they exist.

Yes, the cost of failure can be a lost customer, repeated N times.
Don't sell it short. It could be a DEAD CUSTOMER, repeated N times.

Or it could be a single entry in a log somewhere.
>>* break down the memory requirement into two or more sub-blocks.
Applicable only if the failure is a result of trying to allocate more
memory than you really need in one transaction, which is either a flaw or
an inappropriate optimization (or both, depending on your point of view).

No, you've misunderstood. I'm thinking about situations where you would
ideally like a contiguous block of N bytes, but would be able to manage
with B blocks of (N + B - 1) / B bytes (or, more generally, a bunch of
blocks that between them total N bytes). The allocator might find that
easier to manage.
I see what you're getting at; the allocator may be unable to satisfy your
request due to fragmentation and the like. This doesn't work very well as a
strategy to recover, however; you're looking at writing a program that
allocates either a contiguous region of bytes or (if that should happen not
to work) finds some way to deal with a bunch of regions.

If you need to be able to handle the latter, though, you'll write code that
deals with a bunch of regions in the first place, with the number of regions
possibly equal to 1. (You may choose to split these cases, but it's unlikely
to become faster or more maintainable.)

Of course, then you need to settle on some sort of minimum region size
you're willing to accept, somewhere between the absolute lower bound of the
minimum allocator overhead and the minimum meaningful region for your
application.

All this will not make the program more *robust*, however, it'll simply
allow it to make better use of existing resources. ("Simply" should
nevertheless not be taken lightly.) Robustness is given by how well the
system behaves when the resources run dry, not how well it squeezes water
from the remaining stones when there's a drought in sight.
>>* use less memory!
Will solve the problem, in the sense that "don't do that then" will cure
any pain you may experience while moving your arm. The bit we're
interested in is when you've decided that you absolutely have to move your
arm.

Consider reading a complete line into memory. It's good practice to extend
the buffer by a multiple (typically 1.5 or 2) whenever you run out of RAM.
So - let's say you've got a buffer of 8192 bytes, and you've filled it but
you still haven't finished reading the line. So you try to realloc to
16384, but realloc says no. Well, nothing in the rules says you're
necessarily going to need all that extra RAM, so it might be worth trying
to realloc to 8192 + 4096 = 12288 or something like that instead.
Thanks, that explanation was a bit more meaningful than "use less memory". :-)

This is a good amendment to a standard exponential allocation strategy.
>>* point to a fixed-length buffer instead (and remember not to free it!)
Thread unsafe (unavoidably breaches modularity by aliasing a global, if
you like it more general), increased potential for buffer overflow,
requires checking for an exceptional construct in the corresponding free()
wrapper (I'm assuming we'd wrap this).

Who says it has to be global? It would never have occurred to me to use a
file scope object for such a purpose.
So N fixed-length buffers, then? Obviously N can't be dynamic, or you're
back to square one.
>I don't see how this would ever be preferable to your next solution:
>>* allocate an emergency reserve at the beginning of the program
How big of an emergency reserve, though? To make this work, your biggest
allocation should never exceed the emergency reserve (this could be
tedious to enforce, but is doable) and it will only allow you to complete
whatever you're doing right now.

Yes, but that might be enough to get the user's data to a point where it can
be saved, and reloaded later in a more memory-rich environment.
Yes, as I said, of all your solutions, this one comes closest to actually
making things more robust in the face of imminent failure. The other items,
while helpful, delay the inevitable. Valuable as that is, I'm more
interested in what to do when the inevitable comes around, as it inevitably
will.
<snip>
>>* use virtual memory on another machine networked to this one (this
is a lot of work, but it may be worth it on super-huge projects)
This is not really an answer to "how do I deal with memory allocation
failure in a program", but to "how do I make sure my program doesn't
encounter memory allocation failure".

Well, that would be nice, but I presume that most programmers would rather
have their RAM nice and local, where they can get at it quickly and easily.
It's a port in a storm, not a general purpose allocation strategy.
A strange sort of port, when you need to first build a ship, then tow it
across the sea to its destination.

It's a "you might consider it" thing that seems very problem-specific to me;
"link computers together so you have more resources" is certainly an
approach, but not one I'd expect to see as a recommendation for making
programs more robust. That's not to say the suggestion itself isn't
valuable, of course.

S.
Aug 1 '06 #35
Richard Heathfield <in*****@invalid.invalid> writes:
Keith Thompson said:
[...]
>Do programmers really go to the effort of implementing two
separate algorithms, one of which will be used only on a memory
failure

Yes, sometimes, if the situation warrants it. Often, it won't, but often !=
always. Normally, the goal is to fail gracefully without losing user data.
But some programs Must Not Fail - e.g. safety-critical stuff.
In that context, I would think it would *usually* make more sense to
use just the slower and more robust algorithm in the first place.

Possibly the faster and more memory-intensive method would be
necessary to meet deadlines, and if you run out of memory you can fall
back to the slower method and continue running in a degraded mode.
But then again, safety-critical real-time code tends not to use
dynamic memory allocation at all.
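
As an aside, a minimal sketch of the sort of scheme such code uses instead of
malloc() (block count, block size, and names are purely illustrative
assumptions): a statically allocated pool of fixed-size blocks, handed out
and returned without ever touching the heap.

    #include <stddef.h>

    #define POOL_BLOCKS 64
    #define BLOCK_SIZE  128   /* must be large enough for the worst case */

    /* alignment of the blocks is glossed over in this sketch */
    static unsigned char pool[POOL_BLOCKS][BLOCK_SIZE];
    static unsigned char *free_list[POOL_BLOCKS];
    static size_t free_count;

    void pool_init(void)
    {
        for (free_count = 0; free_count < POOL_BLOCKS; free_count++)
            free_list[free_count] = pool[free_count];
    }

    void *pool_alloc(void)   /* NULL when the pool is exhausted */
    {
        return free_count > 0 ? free_list[--free_count] : NULL;
    }

    void pool_free(void *p)
    {
        if (p != NULL)
            free_list[free_count++] = p;   /* void * converts implicitly in C */
    }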

I think that, 99% of the time, the best response to an allocation
failure is to clean up and abort. (The tendency to omit the "clean
up" portion is regrettable; the tendency to omit the "abort" portion,
i.e., to ignore the error, is even more so.) That other 1% requires
the other 99% of the effort.

--
Keith Thompson (The_Other_Keith) ks***@mib.org <http://www.ghoti.net/~kst>
San Diego Supercomputer Center <* <http://users.sdsc.edu/~kst>
We must do something. This is something. Therefore, we must do this.
Aug 1 '06 #36
Keith Thompson <ks***@mib.org> writes:
Do programmers really go to the effort of implementing two
separate algorithms, one of which will be used only on a memory
failure (and will therefore not be tested as thoroughly as the
primary algorithm)?
Only occasionally, in my experience. Sometimes I do this if I
don't want to introduce an error path in a stack of function
calls that doesn't have one and which shouldn't have one.

One example is a merge sort (that requires O(n) extra storage)
that falls back to another sort algorithm if storage is not
available.
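
A compressed sketch of the kind of arrangement Ben describes (sorting ints,
with insertion sort standing in for "another sort algorithm"; all names here
are illustrative assumptions):

    #include <stdlib.h>
    #include <string.h>

    static void insertion_sort(int *a, size_t n)   /* O(1) extra space */
    {
        for (size_t i = 1; i < n; i++) {
            int key = a[i];
            size_t j = i;
            while (j > 0 && a[j - 1] > key) {
                a[j] = a[j - 1];
                j--;
            }
            a[j] = key;
        }
    }

    static void merge_with_buffer(int *a, size_t n, int *tmp)
    {
        if (n < 2)
            return;
        size_t mid = n / 2;
        merge_with_buffer(a, mid, tmp);
        merge_with_buffer(a + mid, n - mid, tmp);
        size_t i = 0, j = mid, k = 0;
        while (i < mid && j < n)
            tmp[k++] = (a[i] <= a[j]) ? a[i++] : a[j++];
        while (i < mid)
            tmp[k++] = a[i++];
        while (j < n)
            tmp[k++] = a[j++];
        memcpy(a, tmp, n * sizeof *a);
    }

    void sort_ints(int *a, size_t n)
    {
        int *tmp = malloc(n * sizeof *tmp);   /* O(n) scratch space */
        if (tmp != NULL) {
            merge_with_buffer(a, n, tmp);
            free(tmp);
        } else {
            insertion_sort(a, n);   /* slower, but needs no extra storage */
        }
    }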
--
"...deficie nt support can be a virtue.
It keeps the amateurs off."
--Bjarne Stroustrup
Aug 1 '06 #37
On Tue, 01 Aug 2006 22:27:24 GMT, Keith Thompson <ks***@mib.org>
wrote:
>Richard Heathfield <in*****@invalid.invalid> writes:
>Keith Thompson said:
[...]
>>Do programmers really go to the effort of implementing two
separate algorithms, one of which will be used only on a memory
failure

Yes, sometimes, if the situation warrants it. Often, it won't, but often !=
always. Normally, the goal is to fail gracefully without losing user data.
But some programs Must Not Fail - e.g. safety-critical stuff.

In that context, I would think it would *usually* make more sense to
use just the slower and more robust algorithm in the first place.

Possibly the faster and more memory-intensive method would be
necessary to meet deadlines, and if you run out of memory you can fall
back to the slower method and continue running in a degraded mode.
But then again, safety-critical real-time code tends not to use
dynamic memory allocation at all.

I think that, 99% of the time, the best response to an allocation
failure is to clean up and abort. (The tendency to omit the "clean
up" portion is regrettable; the tendency to omit the "abort" portion,
i.e., to ignore the error, is even more so.) That other 1% requires
the other 99% of the effort.
I've worked on lots of safety-critical (and money-critical) systems,
and I'd emphasize the word "system." The program is only a part of the
system, albeit a very important part. Sometimes the best thing the
program can do is get out of the way and let the failsafes take over.
Sometimes, in fact, the computer itself gets out of the way, for
example if it detects an unrecoverable memory fault and can no longer
depend on its own calculations.

This is a big subject, and 99% off-topic <G>.

--
Al Balmer
Sun City, AZ
Aug 1 '06 #38
Skarmander wrote:
jacob navia wrote:
>Keith Thompson wrote:
>>Personally, I don't use debuggers very often, so it usually wouldn't
occur to me to distort my code to make it easier to use in a debugger.


Ahhh... You do not use debuggers very often?

Mmmm... Well, they are great tools. I use them very often,
actually I spend most of the time either in the editor
or in the debugger.
That's a common but by no means universal approach to coding.
Personally, I find that coding as if my system had no debuggers results
in code for which I don't need a debugger, and this is why I don't use
debuggers often.

I think that this style of coding is overall more speedy than the one
where you do assume debuggers are readily available for assistance, but
I have no hard data on it, and doubtlessly it will vary by individual.
>The only time when I did not use a debugger was when I was writing
the debugger for lcc-win32. My debugger wasn't then able
to debug itself so I had to develop it without any help, which made
things considerably more difficult...
Well, you could have used a different debugger... Or, like compilers
bootstrapping themselves, you could have used a rudimentary but easily
eye-proofed version of your debugger to develop the rest with.

S.
The only time I used a debugger in 35 years of
programming, was when an assembler decoding
subroutine failed to cross a 64K barrier properly.
(I did not write that one.)
To me, it smells like "let the system catch my errors"
while you should try to avoid them in the first place.
Sloppy typing can cause errors which are not
found by your compiler/debugger.
Aug 1 '06 #39
Skarmander said:
Richard Heathfield wrote:
<snip>
>>
Perhaps I should have made it clearer that I'm referring to programs that
are intended to be used over and over and over by lotsa lotsa people. If
it's not worth the programmer's time to write the code robustly, it's not
worth my time to use his program. Of course, it may be worth /his/ time
to use his own program.
You're using "robust" as if it's a yes or no property. You're free to
storm off in a huff shouting "well that's not robust then!" if someone
presents you with a program that can deal with just about anything except
memory exhaustion when it finally comes round the bend, but that doesn't
mean everyone would or should.
<grin> No, there's no harumphing going on over here. But basic resource
acquisition checking is as fundamental to robustness as steering so as not
to bump the kerb is to driving.
>>I'm not saying the tradeoffs involved are always correctly assessed
(it's probably a given that the cost of failure is usually
underestimated), but I do believe they exist.

Yes, the cost of failure can be a lost customer, repeated N times.
Don't sell it short. It could be a DEAD CUSTOMER, repeated N times.
Absolutely. And a dead customer is a lost customer, right?
Or it could be a single entry in a log somewhere.
Or not even that. We're back to undefined behaviour.

>>>* break down the memory requirement into two or more sub-blocks.
Applicable only if the failure is a result of trying to allocate more
memory than you really need in one transaction, which is either a flaw
or an inappropriate optimization (or both, depending on your point of
view).

No, you've misunderstood. I'm thinking about situations where you would
ideally like a contiguous block of N bytes, but would be able to manage
with B blocks of (N + B - 1) / B bytes (or, more generally, a bunch of
blocks that between them total N bytes). The allocator might find that
easier to manage.
I see what you're getting at; the allocator may be unable to satisfy your
request due to fragmentation and the like. This doesn't work very well as
a strategy to recover, however; you're looking at writing a program that
allocates either a contiguous region of bytes or (if that should happen
not to work) finds some way to deal with a bunch of regions.
Done it. It wasn't pretty, but it is certainly possible sometimes. Not all
the time, I grant you. (If there were one-size-fits-all, we'd all know
about it and probably many of us would be using it.)

<snip>
All this will not make the program more *robust*, however, it'll simply
allow it to make better use of existing resources. ("Simply" should
nevertheless not be taken lightly.) Robustness is given by how well the
system behaves when the resources run dry, not how well it squeezes water
from the remaining stones when there's a drought in sight.
If the objective is to gather enough moisture to survive until the rescue
helicopter arrives - that is, if the objective is to complete the immediate
task so that a consistent set of user data can be saved before you bomb out
- then it's a sensible approach.

<snip>
>Consider reading a complete line into memory. It's good practice to
extend the buffer by a multiple (typically 1.5 or 2) whenever you run out
of RAM. So - let's say you've got a buffer of 8192 bytes, and you've
filled it but you still haven't finished reading the line. So you try to
realloc to 16384, but realloc says no. Well, nothing in the rules says
you're necessarily going to need all that extra RAM, so it might be worth
trying to realloc to 8192 + 4096 = 12288 or something like that instead.
Thanks, that explanation was a bit more meaningful than "use less memory".
:-)
I was trying not to rewrite the book, okay? :-)
>>>* point to a fixed-length buffer instead (and remember not to free it!)
Thread unsafe (unavoidably breaches modularity by aliasing a global, if
you like it more general), increased potential for buffer overflow,
requires checking for an exceptional construct in the corresponding
free() wrapper (I'm assuming we'd wrap this).

Who says it has to be global? It would never have occurred to me to use a
file scope object for such a purpose.
So N fixed-length buffers, then? Obviously N can't be dynamic, or you're
back to square one.
I was assuming just the one, actually, local to the function where the
storage is needed.

<snip>

--
Richard Heathfield
"Usenet is a strange place" - dmr 29/7/1999
http://www.cpax.org.uk
email: rjh at above domain (but drop the www, obviously)
Aug 2 '06 #40
