Bytes | Software Development & Data Engineering Community

Seriously struggling with C

RG
Greetings friends,

This semester I have started a course in C programming. I was moving
along fine until I reached the topic of loops (I feel embarrassed
among you elite programmers). My prof. would post program questions
and the solutions online. For practice I would try to do the problems.
I would get to a certain point in the code, for example as far as
error trapping, but when the loop arrives, like knowing whether to use
for, while, or do-while, how to properly use increments and decrements,
and counters, I am just not proficient in it and the class is moving
ahead. Eventually I would have to look at the solution, wondering to
myself why I could not think of it. What ticks me off is that
other kids are getting this stuff easily, while I am having a hard
time. Kindly advise me on what actions I should take. I would
particularly like to have an idea of the thought process to engage in
when given a program to write.
Thanks for your time and consideration.

RG

Feb 20 '06
Dik T. Winter wrote:
Again I have not received the original; that is why I respond to this.

In article <dt**********@malatesta.hpl.hp.com> Chris Dollin <ke**@hpl.hp.com> writes:
> Richard G. Riley wrote:
> > In addition printfs dont give you watchpoints, breakpoints etc. I cant
> > even believe we are having this discussion to be honest, although I
> > suppose you're not necessarily defending it : just that it can be done
> > - on that we are agreed.

>
> Oh, I /am/ defending it. In my (limited) C experience, I have not
> needed to resort frequently to a C debugger. I would expect to need
> one less nowadays than I used to, as well.


What I am missing here is that debugging code with breakpoints can be
*more* time consuming than using printf's to show the state. I once
had to debug a program I had written (80k+ lines of code). On some
machine it did not work. It appeared that on the umpteenth occurrence
of some call to some routine something was wrong. It is impossible
to detect such a thing using breakpoints or watchpoints. Using proper
printf's and scrutinising the output will get you the answer much
faster as to why it did not work.


Personally I find debuggers extremely useful under *some* conditions. In
other situations, I find printf (or a more sophisticated logging system)
far more useful.

One true, but extreme, example where a debugger and ICE combination were
invaluable was when trying to find what was causing all units to crash
when taken out of storage and powered up. Every unit crashed in about
the same place in its power up tests, all wrote garbage over the
display, and generally gave all the symptoms of the processor having run
off into the wild blue yonder for some reason. I had examined the code
(which I did not write) on a few occasions trying to find any possible
reason for the crashes. I could find none. After many attempts at
playing with various break conditions I eventually caught the problem. I
could see quite clearly in the trace that just before it all went to pot
the processor had read a *different* instruction than the ROM actually
contained. Before anyone convinces me that debuggers are of no use (most
have said limited use) they will have to explain to me how I could have
found that and proved it to anyone else *without* the use of the debugger.

Before anyone says ah, but that is a once in a lifetime situation, I've
also managed to catch other "impossible" crashes in debuggers and
demonstrate to people that it was actually the hardware doing something
screwy.

I've also used logic analysers in the same way I might use a debugger to
see what a program is doing where I had no way of capturing realistic
input data. The code was actually implementing a control loop, so the
input for one loop depended on the output of the previous loop *and* the
outside world. Capturing selective data with some very clever triggers
(as complex as you can use with many debuggers) I could then use the
information to work out how the algorithm was failing. I actually used
this method on at least three different algorithms on the same system,
and also used it to prove to the HW engineers yet again when the
hardware was faulty.

A bigger use for debuggers for me is when we have built a beta test or
production version of the software (a lot of which is not written by me)
and someone doing testing can easily crash the software but I can't (or
a customer has crashed it when coming in to do testing for us) and I
attach the debugger to examine what state they have got the program in
to. Sometimes the call stack is sufficient to point in the right
direction, sometimes examining the states of variables provides a big
insight, often I just pass the information on to another developer who
then examines the code and finds the problem.

Sometimes I use a debugger to break the code at specific points and see
what the state is because I am too lazy to add in the printf statements
and rebuild.

However, I am gradually extending the logging throughout the code in a
way that can easily be enabled at runtime (by setting an environment
variable) and as I extend it to cover more of the functionality of the
program I am gradually finding it of more and more use.

So my position is that both tools have their uses, and which you use more
will depend on a lot of things outside your control, such as the quality
of the HW, the quality of code written by others, the variability of
external inputs, how reproducible problems are etc.

I almost forgot, another time when a debugger was invaluable was with a
highly complex processor where I had thought a particular combination of
options on an assembler instruction was valid, the assembler accepted
it, but stepping through in the debugger because I could not see how the
code was failing I saw that the disassembly showed a roll where I
specified a shift. Not C, but a use of a debugger worked where other
tools had failed and neither myself nor another software developer could
see anything wrong with the code. On this code we really were after
every clock cycle we could get and it was sometimes worth the half hour
it took to work out if a particularly complex instruction was allowed.
--
Flash Gordon
Living in interesting times.
Web site - http://home.flash-gordon.me.uk/
comp.lang.c posting guidelines and intro -
http://clc-wiki.net/wiki/Intro_to_clc
Feb 23 '06 #121
On Thu, 23 Feb 2006 11:07:29 UTC, "Dik T. Winter" <Di********@cwi.nl>
wrote:
In article <46************@individual.net> "Richard G. Riley" <rg***********@gmail.com> writes:
> On 2006-02-23, Dik T. Winter <Di********@cwi.nl> wrote:

...
> > machine it did not work. It appeared that on the umpteenth occurrence
> > of some call to some routine something was wrong. It is impossible
> > to detect such using breakpoints or watchpoints. Using proper
> > printf's

>
> This is simply not true. Since you must have some idea where the
> problem is to insert the "printf" then you have some idea where to set
> your breakpoint to detect "naughty data" : then you can do a stack
> trace to see where this data originated.


Also when the problem occurs on something like the millionth call to some
routine?


Yes. A conditional breakpoint does the trick. Let the call pass
undebugged 999,999 times and trace the 1,000,000th run through.

A debugger is a nice tool to catch bugs of nearly all kinds but you
have to use your brain to use it right.

You may fiddle around with your code in the debugger for weeks when you
can't form a plan for how to catch the point that is failing at its
job. You will debug in only minutes through some hundred million lines
of recursive code when you know what you need. Using a debugger
requires knowing how to use the tool right and having a plan for how
to reach the corner where the cause of the bug sits. Then inspect the
code and data until you see what goes wrong and why. Often enough you
can patch the data while editing the source and go on to the next flaw.

A debug session of less than 30 minutes can avoid days of implementing
debug printfs, running the program, reverting the printfs to other
data, and endless recompiles to get the output you need, only to see
that the data you print out is in the end absolutely no help, and
restarting the cycle of edit and compile only to fail again and again.

When you have learned how to use your debugger effectively you will
fire up your debuggee under control of the debugger, set conditional
and unconditional break- and watchpoints, and then start the run to
get a picture of the variables at the critical points, tracing through
the suspicious statements and running over uncritical functions until
you have it.

Then, when all bugs seem to be fixed, you will use other tools for
automated regression tests, falling back to the debugger whenever
there is a need, until all test conditions are completed flawlessly
and the application is ready for either public beta or GA.

Code inspections of any kind will not preserve you from having bugs in
the code, even though they will reduce them significantly if the
inspectors are excellent programmers.

--
Tschau/Bye
Herbert

Visit http://www.ecomstation.de, the home of the German eComStation.
eComStation 1.2 in German is here!
Feb 23 '06 #122
Micah Cowan wrote:
Test-driven development.
That last is the one I use nowadays. But even in my sloppy youth,
using a debugger - watchpoints and breakpoints and manual stepping
and all - didn't seem to be helpful to me. But, because I haven't
had a proper C project for a while, I can't
speak to developing C using TDD - I expect that to change this year,
depending on how much home-time I can free up.

It does work well. You have to bend a few rules, but I've run a couple
of successful TDD C projects.

I've only recently started using this model as much as possible, and
it has turned out to be quite helpful for me.

Good, spread the word!
However, I personally still use a debugger quite a lot. If for no
other reason, I frequently like to reassure my paranoid self that the
unit-test itself is actually testing what I think it is. As a
side-benefit, stepping through the code this way (which doesn't
technically require the aid of a debugger) can sometimes reveal
further unit tests that I have neglected to write.

This is typical of someone starting out in TDD, I was exactly the same.
Over time your tests will improve and you will learn to trust the
process and your tests.

--
Ian Collins.
Feb 23 '06 #123
Chris Hills wrote:
In article <Iv********@cwi.nl>, Dik T. Winter <Di********@cwi.nl> writes
In article <46************@individual.net> "Richard G. Riley"
<rg***********@gmail.com> writes:
On 2006-02-23, Dik T. Winter <Di********@cwi.nl> wrote: ...
machine it did not work. It appeared that on the umpteenth occurrence
of some call to some routine something was wrong. It is impossible
to detect such using breakpoints or watchpoints. Using proper
printf's
This is simply not true. Since you must have some idea where the
problem is to insert the "printf" then you have some idea where to set
your breakpoint to detect "naughty data" : then you can do a stack
trace to see where this data originated.

Also when the problem occurs on something like the millionth call to some
routine?


This is where an ICE is essential. Let it run the system using filters
and watchpoints. Also, most good ones will let you put timing
constraints and bands on, plus conditional breakpoints with actions,
etc. It's virtually the only way to find this sort of problem
(assuming you have used static analysis to get rid of the silly stuff
first)


OK, I'll just open up the PC and hook up an ICE. Now, I wonder what
location Linux will load the application at this time...
However printf is just about the worst thing you can use in this case as
it changes the memory map and the timing


The rest of what the server is doing (in my case these days) also
changes the timings. For example, if the 201st user is running a complex
report it will slow down the daemon that is making a SOAP request. As to
the memory map, these days I have a logging system (which wraps up
printf calls) which allows me to enable debug logging when required even
on a production build with no change to the SW and so minimal change to
the memory map. Disabling the optimiser so the debugger is more useful
sometimes prevents the program from crashing, so using a debugger can be
*harder* than using printf statements in at least some situations.

With one recent problem, adding in calls to the debug logging framework
to more sections of the code allowed me to eliminate one of the two
programmes as the cause of a timing-related problem. I could see quite
clearly from the log that this program was doing the correct things in
the right order. I then went through using more printfs (well, calls to
a similar debugging system implemented in Java) and it showed exactly
what the bug was that allowed things to go wrong when the timing was
exactly wrong.

On an embedded system I worked on, on the other hand, being able to
break all 20 processors approximately simultaneously allowed me to
examine the state of various parts of the system (often only looking at
a few of the 20 debuggers that were running in synchronisation) and then
resume running without having had things get seriously out of step. This
was invaluable. Sometimes on this system I also literally had to sit
down and count clock cycles on printouts to work out what the timing
relationship would be.
--
Flash Gordon
Living in interesting times.
Web site - http://home.flash-gordon.me.uk/
comp.lang.c posting guidelines and intro -
http://clc-wiki.net/wiki/Intro_to_clc
Feb 23 '06 #124
Ian Collins wrote:
Richard G. Riley wrote:


<snip>
The best way for a programmer, and more importantly his customer, to
be fully comfortable with your code is to have a complete set of
automated tests. Nothing else gives you the confidence to refactor
code.


I disagree on this I must say. I find automated tests tend to give a
false sense of security. They do contribute, of course, but are often
seen as an indicator of infallibility. Sometimes better to stick a
monkey at the keyboard with a BIG hammer!!!! :)

The sense isn't false if the tests are good and written first. In my
opinion, tests added after the code is written are second rate.


On one of the projects I worked on early in my career (written in
Pascal) I don't think any of us ever used the debugger and we did not
write the tests until after we wrote the code, and even then the tests
were system level. However, when we did write the tests we went a long
way out of our way to try to think of every single way we could break
the system. Running our software with some of the kit switched off,
unplugging cables whilst it was running, swapping 525 line cards into
what was meant to be a 625 line system (we had a 525 line variant with
the same code base) etc. During that testing we found a significant
number of bugs. During the remaining 10 years of my time in the company,
through many versions of the SW, including getting fresh graduates with no
experience or domain knowledge to do changes, the customers found very
few bugs. However, rerunning these manual system level tests after
changes *did* find problems, and each time the customer found a bug we
extended our tests to catch the bug.

So tests written after the code can be *very* effective, but you have to
actively *try* to break the code in your testing rather than trying to
prove that it is correct.

I've seen far more problems with testing written either by an
independent team or where the tests have been designed before the coding
where people have been trying to prove the code correct than I have with
tests written after the fact with people actively trying to prove the
code is wrong. Of course, the ideal would probably be to write the test
first but to write them as an active attempt to prove the software *wrong*.

I still stand by the opinion that in some situation debuggers are
invaluable, even though in this project we didn't use them and generally
did not even need debugging output apart from a debug output on crash.
Highly partitioned SW and destructive testing proved more than sufficient.
--
Flash Gordon
Living in interesting times.
Web site - http://home.flash-gordon.me.uk/
comp.lang.c posting guidelines and intro -
http://clc-wiki.net/wiki/Intro_to_clc
Feb 23 '06 #125
Flash Gordon wrote:
The sense isn't false if the tests are good and written first. In my
opinion, tests added after the code is written are second rate.

On one of the projects I worked on early in my career (written in
Pascal) I don't think any of us ever used the debugger and we did not
write the tests until after we wrote the code, and even then the tests
were system level. However, when we did write the tests we went a long
way out of our way to try to think of every single way we could break
the system. Running our software with some of the kit switched off,
unplugging cables whilst it was running, swapping 525 line cards into
what was meant to be a 625 line system (we had a 525 line variant with
the same code base) etc. During that testing we found a significant
number of bugs. During the remaining 10 years of my time in the company,
through many versions of the SW, including getting fresh graduates with
no experience or domain knowledge to do changes, the customers found very
few bugs. However, rerunning these manual system level tests after
changes *did* find problems, and each time the customer found a bug we
extended our tests to catch the bug.

What you are describing are what I'd call acceptance tests, working on
the code from the outside, confirming that the system behaves as expected
by the customer. Unit tests written as part of the TDD process test
individual components of the system, down to individual function level,
from the inside. They test the logic according to the programmer's
understanding.

I'd always recommend both types of testing.
So tests written after the code can be *very* effective, but you have to
actively *try* to break the code in your testing rather than trying to
prove that it is correct.
Indeed. But not unit tests written after the code.
I've seen far more problems with testing written either by an
independent team or where the tests have been designed before the coding
where people have been trying to prove the code correct than I have with
tests written after the fact with people actively trying to prove the
code is wrong. Of course, the ideal would probably be to write the test
first but to write them as an active attempt to prove the software *wrong*.

When doing TDD, every unit test fails until the code to make it pass is
written.

--
Ian Collins.
Feb 24 '06 #126
Herbert Rosenau wrote:
On Thu, 23 Feb 2006 11:07:29 UTC, "Dik T. Winter" <Di********@cwi.nl>
wrote:
In article <46************@individual.net> "Richard G. Riley" <rg***********@gmail.com> writes:
> On 2006-02-23, Dik T. Winter <Di********@cwi.nl> wrote:

...
> > machine it did not work. It appeared that on the umpteenth occurrence
> > of some call to some routine something was wrong. It is impossible
> > to detect such using breakpoints or watchpoints. Using proper
> > printf's
>
> This is simply not true. Since you must have some idea where the
> problem is to insert the "printf" then you have some idea where to set
> your breakpoint to detect "naughty data" : then you can do a stack
> trace to see where this data originated.


Also when the problem occurs on something like the millionth call to some
routine?


Yes. A conditional breakpoint does the trick. Let the call pass
undebugged 999,999 times and trace the 1,000,000th run through.

A debugger is a nice tool to catch bugs of nearly all kinds but you
have to use your brain to use it right.


<snip>

Oh look, my debugger broke on the file open failing. <fx: looks at file
system> Odd, the file is there and with correct permissions. <fx: Looks
at nicely timestamped logs from system here and system 200 miles away>
Ah, I see, that server 200 miles away is taking longer than the
specified time to put the file on my server.

There are times when debug logs can be *far* easier to use than a
debugger. Not always in my opinion, but they do exist.

Or another situation that really has happened to me. Two different
programs one in Java the other in C are interacting in an incorrect
manner. However, the system only fails more than once every few weeks
when under heavy load at customer sites in the week before month end
when they are running hundreds of invoices through the system hourly. Do
I ask my customer if they will let me know each time they start a new
session so I can attach a debugger to it (I can't wait the weeks, at
minimum, it would likely take for me to replicate the problem once, and
I don't know *exactly* what the customer's 50 users in that department
are doing to cause it), or do I spend an hour putting in some debug
logging and ask them to run with that build for a bit?

I chose putting in some debug logging. I had a significant piece of the
puzzle a couple of days later (they could not install the debug build
immediately) and the next day I had the solution. Far less work than
using a debugger would have been.

It is quite common for those reporting bugs to miss out some critical
piece of information in what they are doing. The addition of easily
enabled logging is gradually making it far easier for us to see exactly
what the customer is doing to cause the failure, something a debugger
can never do.
--
Flash Gordon
Living in interesting times.
Web site - http://home.flash-gordon.me.uk/
comp.lang.c posting guidelines and intro -
http://clc-wiki.net/wiki/Intro_to_clc
Feb 24 '06 #127
Ian Collins wrote:
Flash Gordon wrote:

The sense isn't false if the tests are good and written first. In my
opinion, tests added after the code is written are second rate.

On one of the projects I worked on early in my career (written in
Pascal) I don't think any of us ever used the debugger and we did not
write the tests until after we wrote the code, and even then the tests
were system level. However, when we did write the tests we went a long
way out of our way to try to think of every single way we could break
the system. Running our software with some of the kit switched off,
unplugging cables whilst it was running, swapping 525 line cards into
what was meant to be a 625 line system (we had a 525 line variant with
the same code base) etc. During that testing we found a significant
number of bugs. During the remaining 10 years of my time in the
company, through many versions of the SW, including getting fresh
graduates with no experience or domain knowledge to do changes, the
customers found very few bugs. However, rerunning these manual system
level tests after changes *did* find problems, and each time the
customer found a bug we extended our tests to catch the bug.

What you are describing are what I'd call acceptance tests, working on
the code from the outside confirming that the system behaves as expected
by the customer.


No, we were not trying to confirm it behaved as expected. Quite the
reverse, we were being as devious as possible in trying to prove it did
*not* work. We succeeded in that. Our customer acceptance tests, on the
other hand, were designed to demonstrate it worked as expected and took
far less time.
Unit tests written as part of the TDD process test
individual components of the system, down to individual function level
from the inside. They test the logic according to the programmer's
understanding.
We did tests at the system level designed to exercise the specific
functions, and sometimes specific if statements or exception traps.
I'd always recommend both types of testing.


As would I.
So tests written after the code can be *very* effective, but you have
to actively *try* to break the code in your testing rather than trying
to prove that it is correct.

Indeed. But not unit tests written after the code.


In this instance they proved highly effective. In terms of customer
reported bugs and customer perception of the quality of the code, it is
probably about the most successful production project I have come
across. 50000 lines of code and I think under 10 customer reported bugs
in 10 years. Even if I am a factor of 10 out that is still only 1 bug
per 500 lines of code over a 10 year period, or 1 bug per 5000 lines per
year.
I've seen far more problems with testing written either by an
independent team or where the tests have been designed before the
coding where people have been trying to prove the code correct than I
have with tests written after the fact with people actively trying to
prove the code is wrong. Of course, the ideal would probably be to
write the test first but to write them as an active attempt to prove
the software *wrong*.

When doing TDD, every unit test fails until the code to make it pass is
written.


That could be said of any test. If the test checks if 5 is returned when
the input is 3, then until you have put something inside the function
body of course it will fail.

When we were testing this code we tried to get the code to fail by
having devices absent that should always be there, intermittent
communications over what we knew were definitely reliable links, trying
to force it to do division by zero (faking it so that it missed the
reference frequency in a frequency response test) and so on. This is
something you can apply to unit testing, system level testing, or any
other form of testing, but it is also something that in my experience
many people do *not* do whatever form of testing they are doing.

I'm not disputing the benefits of TDD, nor saying that today I would do
things the same way we did in 1990 in the Test Engineering Department
(making test equipment) where I used to work. I'm saying that:
1) Methods other than TDD can be successful in the right situation
2) Whatever testing you are doing, the tests should be designed to try,
in every conceivable way *and* in inconceivable ways, to prove that the
code is *wrong*.

Part of 2 is testing boundary conditions, part is forgetting what the
requirements on external systems are and what is possible for them (you
know that a user can't press a key 1000 times a second, don't you? Forget
that, because a HW fault could have the same
effect as a user pressing the key 1000 times a second) and part is
working on the assumption that you know damn well there is a bug
somewhere that you couldn't possibly conceive, so you need to test the
inconceivable.

BTW, I've seen a HW fault causing the same effect as a user pressing a
key at a stupidly high rate. The logic circuit basically became an
oscillator for as long as a key was held down, so I know damn well that
what most would consider inconceivable is not only possible
theoretically, but sometimes actually happens in real life.

I also spent time as I say working in the Test Engineering Department,
and since we were building test equipment (which also had to test
itself) we developed the attitude of assuming that the SW has to survive
and continue working properly as much as possible even if fundamental
parts of the system are failing in ways you can't conceive of, a
philosophy that I find very useful.
--
Flash Gordon
Living in interesting times.
Web site - http://home.flash-gordon.me.uk/
comp.lang.c posting guidelines and intro -
http://clc-wiki.net/wiki/Intro_to_clc
Feb 24 '06 #128
In article <46************@individual.net> "Richard G. Riley" <rg****@gmail.com> writes:
On 2006-02-23, Dik T. Winter <Di********@cwi.nl> wrote: ....
This is simply not true. Since you must have some idea where the
problem is to insert the "printf" then you have some idea where to set
your breakpoint to detect "naughty data" : then you can do a stack
trace to see where this data originated.


Also when the problem occurs on something like the millionth call to some
routine?


Of course. That's why breakpoints exist. I would do the following -

1) Isolate as close as possible to where/why bug is happening.


And, how do you exactly do that? Once I have isolated the problem as
close as possible, I know fairly fast what the problem is. And it does
not matter whether it is a compiler bug, a hardware bug, or a program
bug. I have encountered all three.

But one of my programs in its initial version gave that there were no
solutions to the problem. However, I did know that, mathematically,
solutions should exist, I only wrote the program to determine what the
actual solutions were. Where to start?

So, what I did do was insert printf's that did show whether the input was
properly stored. Next I removed the previous printf's and inserted printf's
that did show that the input data was actually interpreted correctly in all
cases. Going on this way I could determine the problem with only three
runs of the program. (Somewhere an increment was inside a loop rather
than outside the loop. The net effect was that many cases were missing,
and amongst them the cases that should provide solutions.)
What debuggers have you used? Maybe they were not suitable for the
job in hand?


Oh well. I have used source code debuggers and instruction code debuggers
on many occasions, on more platforms than you can imagine. I disremember
all their names.
--
dik t. winter, cwi, kruislaan 413, 1098 sj amsterdam, nederland, +31205924131
home: bovenover 215, 1025 jn amsterdam, nederland; http://www.cwi.nl/~dik/
Feb 24 '06 #129
Ben Pfaff wrote:
.... snip ...
I doubt that anyone here is trying to say that a debugger cannot
be useful for finding bugs. I personally would take the position
that a debugger is a tool that *can* be used for finding bugs.
It is more of a personal preference whether the debugger *should*
be the first avenue of attack for hunting a bug. For me,
personally, it isn't; for you, I can see that it is.


This interminable thread was launched when one luser troll (since
plonked for refusal to stay on topic) insisted that code should be
formatted to ease debugger use, and I responded with the fact that
I have not used a debugger in anger for years. The troll then made
nasty noises and became even more objectionable. It has since been
seen here only in quotes.

--
"If you want to post a followup via groups.google.c om, don't use
the broken "Reply" link at the bottom of the article. Click on
"show options" at the top of the article, then click on the
"Reply" at the bottom of the article headers." - Keith Thompson
More details at: <http://cfaj.freeshell. org/google/>
Also see <http://www.safalra.com/special/googlegroupsrep ly/>
Feb 24 '06 #130

This thread has been closed and replies have been disabled. Please start a new discussion.