
C# 2004 (Whidbey)

Just finished reading about the upcoming features in the 2004 version of
VS.NET, and C# in particular:
http://msdn.microsoft.com/vstudio/pr.../roadmap.aspx#whidbey
It looks good and I am very happy (about generics, partial types...), except
for one thing: Microsoft is reintroducing the "Edit and Continue" feature
for VB.NET ONLY?
The "Edit and Continue" functionality in the debugger is NOT about
simplicity, but about having a modern and powerful debugger which saves a
LOT of time when testing code. (I remember in the good old days with VB I
would write only the first line of a method, then put it into debugging
mode and continue until it was all finished.) It was available in C++.NET,
it's included in ALL modern Java IDEs (JBuilder...), and now it's back for
VB.NET, which makes Visual C#.NET the only modern and popular language with
half a debugger.
Please, someone from Microsoft, tell me that I am wrong. If it can be done
for VB and C++, then I will not buy any explanation of why it can't be done
for C#.
Thank you

Nov 15 '05 #1
Sure, "Edit and Continue" can be implemented in C# as well as in VB. VB
users demand it as they were accustomed to it in VB6, but according to MSFT,
C# users are more ambivalent about it. A minority want it, a minority think
it's a Bad Thing, and most don't really care. Personally I fall into the
"don't care" camp with suspicions that it could be a Bad Thing, although I'm
trying to remain open-minded about it. I guess my thinking is, I've gotten
along fine without it for 20 years, and besides, I think it's healthy and
useful to work code out in your head and via "desk-checking" before you ever
execute it. Good debugging practice says you step through every line of
code anyway at some point before release.

Now, I'm fully aware that I'm generalizing. A tool isn't necessarily bad
just because it's capable of being misused or because it helps the witless
get farther along before they go down in flames. And as much as long-time
VB developers miss the feature, perhaps there's something to it. I just
haven't seen a compelling need for it in my own work, nor have I ever had a
desire to execute code line by line as I write it.

Anyway, bottom line: from what I've read, MSFT is saying publicly that if
enough C# developers demand Edit and Continue, they'll implement it -- but
it's not trivial to implement, and they don't want to expend resources on it
when C# developers seem to be clamoring for other things.

--Bob

"Arman Oganesian" <ao********@msn.com> wrote in message
news:05****************************@phx.gbl...
[snip - original post quoted in full]

Nov 15 '05 #2

"Bob Grommes" <bo*@bobgrommes.com> wrote in message
news:%2****************@TK2MSFTNGP12.phx.gbl...
> VB developers miss the feature, perhaps there's something to it. I just
> haven't seen a compelling need for it in my own work, nor have I ever had
> a desire to execute code line by line as I write it.


I don't think that's what is meant by "Edit and Continue". E&C is the
ability to break on an error, change the code, and have the debugger start
up again from the same point in the program, with all variables, etc. in
the same state as before the break.

Colin
Nov 15 '05 #3
I absolutely agree with you that one should work out the app in one's head
before starting to code. In fact, for the last year and a half I've been
using Rational XDE with VS.NET, and I've gotten so used to it that even if
I need to create 4-5 classes, first I will do it in UML, then generate the
stubs, and then continue. (In fact, that is another missing capability in
VS.NET: when you compare XDE with Visio, Visio is just a joke; you can't do
a serious design with it. And since Rational has become part of IBM's
software group, for which destroying VS.NET is job #1, I wonder what
Microsoft is going to do about it, since all good IDEs include some
advanced CASE tool capabilities. Even the new C#Builder will come with one,
based on Together Control Center. Now imagine if IBM drops XDE from VS.NET;
I will be the first one to move to C#Builder.)
But back to "Edit and Continue": it is no longer an exclusive VB feature.
Yes, it was pioneered in VB, but now C# will be the only major tool without
it, and it is a huge productivity enhancement. Just imagine you have a Web
app with some security already implemented, and you can't simply select the
page you are working on as the start-up page because it will send you to
the login first; any time you need to make a small fix to the code you have
to start all over again -- and there are many similar situations.
I think it is a lame excuse for Microsoft to say "if not too many
developers are demanding it, then we will not include it". Many C#
developers are not familiar with it, or, like you, don't really care about
it because they did not use it before. The Java people didn't ask for it
either, but it has been included in all major Java IDEs for the last year
and a half, because it's a great productivity enhancer and saves a lot of
time. And now that C# and Java are going neck and neck, whoever makes
better tools may attract more people.
I know it's not an easy thing to do. In Java, for example, I believe the
technology behind it is called "HotSwap" -- basically being able to replace
a piece of compiled code on the fly. I would think that they did something
similar for VB.NET (at the MSIL level?), so most of it should already be
done. And believe me, once you start using it and saving the time, you will
agree that no modern language/IDE/debugger should be released without it.
Arman
-----Original Message-----
[snip - Bob's reply and the original post quoted in full]

Nov 15 '05 #4
"Arman Oganesian" <ao********@msn.com> wrote in message
news:08****************************@phx.gbl...

Hi Arman,

Very good points!

> (In fact, that is another missing capability in VS.NET: when you compare
> XDE with Visio, Visio is just a joke; you can't do a serious design with
> it.

IMHO you don't have to compare Visio with anything to decide that it is a
joke (regarding UML integration capabilities) <g>

> And since Rational has become part of IBM's software group, for which
> destroying VS.NET is job #1, I wonder what Microsoft is going to do about
> it, since all good IDEs include some advanced CASE tool capabilities.

http://www.eweek.com/article2/0,3959,1202588,00.asp

> I think it is a lame excuse for Microsoft to say "if not too many
> developers are demanding it, then we will not include it".

Yes, I was puzzled too; an unbelievable statement.

> I know it's not an easy thing to do. In Java, for example, I believe the
> technology behind it is called "HotSwap" -- basically being able to
> replace a piece of compiled code on the fly.

Some Java IDEs (can't remember which) already have reverse execution (that
may require an IL interpreter), but to see this in VS we may have to wait
too long (it's not even in MS research labs).

> I would think that they did something similar for VB.NET (at the MSIL
> level?), so most of it should already be done, and believe me, once you
> start using it and saving the time, you will agree that no modern
> language/IDE/debugger should be released without it.

Amen

Nov 15 '05 #5
Read this thread in here about what C# needs :-)
"Arman Oganesian" <ao********@msn.com> wrote in message
news:05****************************@phx.gbl...
[snip - original post quoted in full]

Nov 15 '05 #6
Edit and Continue will get implemented in C# VS.NET when Borland decides to
implement it first in their new IDE. Then you will see Microsoft scramble
to put it in, despite the junk they are talking now. It's all about
competition -- and the better product.

"Arman Oganesian" <ao********@msn.com> wrote in message
news:05****************************@phx.gbl...
[snip - original post quoted in full]

Nov 15 '05 #7
Niall,
You are joking, right? Say it ain't so. What company do you work for? What
software do they produce? I hope it ain't airplane software.

"Niall" <as**@me.com> wrote in message
news:uW**************@TK2MSFTNGP09.phx.gbl...

"Bob Grommes" <bo*@bobgrommes.com> wrote in message
news:%2****************@TK2MSFTNGP12.phx.gbl...
Sure, "Edit and Continue" can be implemented in C# as well as in VB. VB
users demand it as they were accustomed to it in VB6, but according to MSFT,
C# users are more ambivalent about it. A minority want it, a minority

think
it's a Bad Thing, and most don't really care. Personally I fall into the "don't care" camp with suspicions that it could be a Bad Thing, although

I'm
trying to remain open-minded about it. I guess my thinking is, I've

gotten
along fine without it for 20 years, and besides, I think it's healthy and useful to work code out in your head and via "desk-checking" before you

ever
execute it. Good debugging practice says you step through every line of
code anyway at some point before release.


I never bother to step through my functions unless I happen to find a bug.

I think this is what unit tests are for. Why make the effort to step through
your function manually each time you change it, which might be a few times
between start of development and end? Instead of that, you can use a unit
test to do the same thing for you, several times a day, and much faster.

Programming is the art of facilitating laziness :P

Niall
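
For the record, a minimal sketch of the kind of automated check Niall is
describing, written against NUnit (a common unit testing framework for
.NET); it exercises the standard System.Collections.Stack instead of
stepping through it by hand:

    using System.Collections;
    using NUnit.Framework;

    [TestFixture]
    public class StackTests
    {
        [Test]
        public void PushThenPopReturnsSameItem()
        {
            Stack stack = new Stack();

            stack.Push(42);
            object popped = stack.Pop();

            // The framework reports any failure automatically, so no one
            // has to walk through the code line by line to verify it.
            Assert.AreEqual(42, popped);
            Assert.AreEqual(0, stack.Count);
        }
    }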

Nov 15 '05 #8
With E&C (and I have seen this happen...), you change something in function
foo that returns an integer. In doing so, other functions that rely on
function foo do not get updated and start returning the wrong values, so,
you break again and change the other functions to return the new values,
when low and behold, you break more functions.

Call me crazy, but I would rather start over from scratch and make sure
that if I change one function, it doesn't cascade up the line and break 10
others. with E&C, the compiler doesn't check to make sure that all uplevel
functions are receiving the correct value. That is the responsibility of
the developer and a lot of developers that I know don't take the time to
even learn what System.Diagnostics.Debug.Assert is or does... but that is
for another thread... ;)
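
Since Debug.Assert came up, here is a small sketch of the kind of assertion
Bill is alluding to; the Account class and its Withdraw method are
hypothetical, invented purely for illustration:

    using System.Diagnostics;

    public class Account
    {
        private decimal balance;

        public void Withdraw(decimal amount)
        {
            // Documents the developer's assumption and pops up a dialog
            // in debug builds if it is violated; the call compiles away
            // in release builds, where DEBUG is not defined.
            Debug.Assert(amount > 0, "Withdraw called with a non-positive amount");

            balance -= amount;
            Debug.Assert(balance >= 0, "Account balance went negative");
        }
    }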

Bill P.

On Wed, 6 Aug 2003 12:35:39 -0700, Arman Oganesian <ao********@msn.com>
wrote:
[snip - original post quoted in full]


--
Using M2, Opera's revolutionary e-mail client: http://www.opera.com/m2/
Nov 15 '05 #9
It's good that you can clarify your position, but I bring you back to your
original post, which says you don't bother to step thru code that you
write:

> Why make the effort to step through your function manually each time you
> change it,

Instead, you rely on unit tests. Unless you know the code, the other guy in
the testing dept. cannot write a unit test to test every line of code you
write. Unless you yourself are writing the unit tests, you cannot guarantee
that the unit tests will test your freshly written code.

Unit tests are only as good as the effort you put into writing them. Unit
tests typically flag failures only; that is, did the button click do what
it was supposed to? Yes/No. It typically DOES NOT check to see if the
routine returned junk. I can't count the number of times scripts were run
against newly written code which passed with flying colors but spat out
nothing but junk first thing in the morning. If you ain't stepping thru
your code, you are taking chances.

Unit tests, or smoke tests as they are sometimes called, are great, but
they find most of their importance in regression testing. That's not what
you said. You said you don't verify the code you write; you rely on someone
else to write a unit test to test it for you. I won't be flying on your
airplane!

"Niall" <as**@me.com> wrote in message
news:uO**************@TK2MSFTNGP09.phx.gbl...
Stepping through your code and checking that the right results are produced
is exactly what a unit test does. You might be confident that your function
is solid after stepping through it once or twice. So what happens then if
the workings of a function you are calling changes? The person who writes
that function might step through it once to make sure it works as they now
intend. How do they magically know that your code, written previously,
wasn't expecting their function to work that way?

So now your code is not working as intended, and no one will know until you
stumble over it while using the application, or you decide to do another
step-through. If you never change the function again in the development of
the system, can you guarantee that you will come back around to check it?
If you are releasing frequent updates of your software, can you guarantee
that you will step through every line of code in the entire system before
each release to deal with possible changes in the code?

Can you guarantee that if someone changes a utility function, they will
look at every place in the code where it is used and make sure that all
those routines work with the new results of their function? In a system as
large as ours (~40 developers), that's a massive amount of work, and the
person who's running around checking everyone else's code may not even know
what the other person's function should now do in the face of different
behaviour. So if they do decide to check someone else's code, they may not
even be able to check it properly in the first place.

So... you write a unit test for your function, which says exactly what it's
supposed to do in each case of the use of the function. You run the unit
tests a hundred times a day, every single day of development. As soon as
something goes wrong, everyone knows about it. Moreover, writing the unit
test (before you write the function) makes you actually work out what you
want the function to do before you code it up.

In the end, the difference between unit testing and your approach is:

- Unit testing forces you to work out what you want before you write the
function, so you have an exact specification of input to output before any
coding of the function begins.

- Unit testing provides a visible contract for the function, so that any
passer-by can work out how your function should be behaving. A suite of
unit tests provides the contract for a section of the application or a
class, and the entire collection provides the contracts for the entire
application.

- A decent unit test will run in the order of a tenth of a second. Manually
stepping through a function will take much longer than that.

- Unit tests can (and most definitely should) be run frequently. The
running of them can be automated, such that the system is constantly being
tested. This allows instantaneous integration of changes. You can even
check whether you'll break somebody else's code before you check yours in,
ensuring that the central version of the system remains bug-free. How many
times would you step through one function between the time it is first
written and the time it is released to clients?

- Unit tests allow the idea of relinquishing code ownership, leading to
easier refactoring... by anyone. If I can change your code without your
tests breaking, then all should be fine. If it's not, the original author
did not write sufficient tests, and they did not write their tests first to
define the behaviour of the class/function.

If I get into a situation where there can be no unit test for the code I'm
writing, then I will step through it as you do and watch it very closely.
But code that cannot be tested is very often a sign of bad design.

In the end, is it not better to have documentation of what your code should
be doing, alongside the code, that can be used as a test and run quickly
whenever anyone needs it -- basically a shield against outside changes that
would affect the correctness of your code?

As far as I can see, unit testing saves time developing, saves time
testing, and saves time due to prevention of errors. So I say again that
programming is the art of facilitating laziness :P

"Alvin Bruney" <vapordan_spam_me_not@hotmail_no_spamhotmail.com > wrote in
message news:uo**************@TK2MSFTNGP09.phx.gbl...
[snip - earlier messages quoted in full]
Nov 15 '05 #10
I talked a lot about writing the unit tests before you actually write the
code you need, whereas you are talking about writing the code and then
getting someone else to test it. Unit tests only properly function as a
contract and a proper test if they are written first. Otherwise, you are
taking a potshot at what you think should be tested, and you can miss
things.

So going back to where I mentioned writing the tests first, and using them
as a contract, then it's always the same coder that writes the test and the
code. It's very bad practice to write some code and then tell someone else
to write the test. The idea of unit testing is to get you to actually work
out what you want the code to do before you write it, and then document the
desired behaviour in a contract that can be run against the code to see if
the contract is actually fulfilled.

So if you don't write the test first, how do you know what you actually want
the code to do? And if you say "because I have already worked it out in my
head", then you should take what you've worked out in your head and put it
into a unit test. Then you start coding, and when the test passes, you are
finished.

Tests should be complete, not just "did it throw an exception? No? Then it
worked fine". You can use coverage profilers at line level to ensure that
your tests are doing things like testing every possibility in an if
statement/case statement, actually getting into the bodies of loops, etc.
If you have code in your function which is not covered by your unit test,
then you have written some functionality into the function which you never
worked out a plan (read: contract) for. So how do you know it's what you
really want if you never made a contract for it? If you do know exactly
what it should be doing, then that should be in the test.
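
To make the coverage point concrete, here is a small hypothetical sketch:
the Discount function below has two branches, so a complete set of tests
needs at least one case per branch, and a line-level coverage profiler
would flag the gap if either test were missing. All the names are invented
for illustration:

    using NUnit.Framework;

    public class Pricing
    {
        // Hypothetical function with two branches to cover.
        public static decimal Discount(decimal total)
        {
            if (total >= 100m)
                return total * 0.9m;  // bulk-discount branch
            return total;             // no-discount branch
        }
    }

    [TestFixture]
    public class PricingTests
    {
        [Test]
        public void LargeOrderGetsTenPercentOff()
        {
            Assert.AreEqual(90m, Pricing.Discount(100m));
        }

        [Test]
        public void SmallOrderIsUnchanged()
        {
            Assert.AreEqual(50m, Pricing.Discount(50m));
        }
    }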

It all comes back to this: everything in the function should be tested by
its unit test. If this isn't true, you have designed the function badly, and
there is the potential for errors. I never said I rely on someone else to
write my tests for the code I have written, because doing that shows that I
wrote my code without working out a clear list of requirements for it.

If I asked a coder for aeroplane software "How do you know your software
works properly", and they said "because I stepped through the code a few
times, and we have a checkin policy where a team leader is required to look
at the code and give it the ok before it's checked in", then I would be
worried. Code is a fluid, dynamic thing, simply stepping through it
occasionally only guarantees that it is working at that explicit point in
time. The only way such a developer could convince me that the software was
safe for me right now would be to step through every line of code at that
instant and show that it all works 100%. Because otherwise you have no idea
whether the function that was checked a year, a month, a week, a day, an
hour ago still works with the latest version of other areas of the code.

That's exactly what unit tests do for you, automatically, and thousands of
times faster. If a client asked us "How can you guarantee that your code is
working 100% right now?", then I would fire up the unit test form and hit
Run. We currently have around 6000 tests (this goes up around a hundred or
so a day at present), and this takes about 10 minutes. Could you desk check
your entire system in 10 minutes? And hence, could you have the entire
system being continually, repeatedly desk checked every 10 minutes, 24/7 so
that any errors are caught within 10 minutes?

Of course, it is possible that bad programmers write bad tests, just as it's
possible that programmers without unit tests forget/decide not to step
through their code, or when they do, they don't do it properly. But if you
are persevering and vigilant, I think the benefits of unit testing are
important and valuable.

Niall

Hmm.. this was a little long :P

"Alvin Bruney" <vapordan_spam_me_not@hotmail_no_spamhotmail.com > wrote in
message news:em**************@TK2MSFTNGP09.phx.gbl...
[snip - earlier messages quoted in full]
Nov 15 '05 #11
I have just re-read your reply. I'm not sure that you and I have the same
idea of the meaning of "unit tests". What I am talking about are test
functions that are written alongside the code they are testing, which use
your object and call the function with the combinations of parameters that
ensure all possible requirements are tested. You use something like an
assertion to check that what the function returns, or does to the object's
state, is what you are expecting. So you end up with one or more (usually
around 10 or so) assertions of expected behaviour for each test. Then you
can run that test whenever you want and see the result.

I'm not talking about user acceptance testing, which goes like "When I
click this button, I should get a report", where someone takes the list of
expected behaviours and runs through it, marking Pass/Fail on each test. I
see those as the contract between the customer and the developer. The unit
tests are the contract between the developer(s) and their code, at a much
lower level.

Say you want a square root function. You write a test (before you write the
square root function) saying that when you call the function with 4, you
expect 2. When you call it with 9, you expect 3; when you call it with 2,
you expect 1.414...; and so on. When you give it a negative number, you
expect a particular type of exception. Later on, if someone comes along to
refactor the code to make it faster, they make their change and run the
test. If it passes, all is good. If it doesn't, you see that the result you
got from sqrt(9) was 5, and something is looking a bit fishy :P
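
That contract, sketched as NUnit tests. The Sqrt wrapper here is
hypothetical (the real Math.Sqrt returns NaN for negative input rather than
throwing), and the third argument to Assert.AreEqual is a tolerance for
comparing floating-point results:

    using System;
    using NUnit.Framework;

    [TestFixture]
    public class SqrtTests
    {
        // Hypothetical function under test: a guarded square root that
        // throws on negative input instead of returning NaN.
        private static double Sqrt(double value)
        {
            if (value < 0)
                throw new ArgumentOutOfRangeException("value");
            return Math.Sqrt(value);
        }

        [Test]
        public void KnownResults()
        {
            Assert.AreEqual(2.0, Sqrt(4.0), 0.0001);
            Assert.AreEqual(3.0, Sqrt(9.0), 0.0001);
            Assert.AreEqual(1.4142, Sqrt(2.0), 0.0001);
        }

        [Test]
        [ExpectedException(typeof(ArgumentOutOfRangeException))]
        public void NegativeInputThrows()
        {
            Sqrt(-1.0);
        }
    }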

Basically, unit tests, as I use the term, are used for design by contract,
such that if the code is not behaving according to the contract, that fact
becomes known instantly. It's similar to the preconditions, postconditions
and invariants which Eiffel touts.

Niall

"Alvin Bruney" <vapordan_spam_me_not@hotmail_no_spamhotmail.com > wrote in
message news:em**************@TK2MSFTNGP09.phx.gbl...
[snip - earlier messages quoted in full]
Nov 15 '05 #12
Alvin Bruney <vapordan_spam_me_not@hotmail_no_spamhotmail.com > wrote:
> It's good that you can clarify your position, but I bring you back to
> your original post, which says you don't bother to step thru code that
> you write:
>
> > Why make the effort to step through your function manually each time
> > you change it,
>
> Instead, you rely on unit tests. Unless you know the code, the other guy
> in the testing dept. cannot write a unit test to test every line of code
> you write.

Which "other guy" are you talking about?

> Unless you yourself are writing the unit tests, you cannot guarantee that
> the unit tests will test your freshly written code.

Where do you get the idea that Niall *isn't* writing the unit tests
himself?

> Unit tests are only as good as the effort you put into writing them.

That's true of everything - but I'd rather put effort into unit testing
properly than into stepping through code properly once.

> Unit tests typically flag failures only; that is, did the button click do
> what it was supposed to? Yes/No. It typically DOES NOT check to see if
> the routine returned junk.

In that case you've seen very badly written unit tests. Note that "do what
it was supposed to" for me includes "it didn't return junk".

> I can't count the number of times scripts were run against newly written
> code which passed with flying colors but spat out nothing but junk first
> thing in the morning. If you ain't stepping thru your code, you are
> taking chances.

If you're stepping through your code once and then assuming it's okay
thereafter without automated tests, you're taking chances too. Spending the
same amount of time writing unit tests as stepping through the code will
usually prove to be far more valuable, in my experience.

Bear in mind that in order to step through your code in all possible types
of circumstances, you probably need to effectively write most of the unit
tests anyway. I remember writing a piece of code recently where it took
longer (and more lines of code) to write the unit tests than the code it
was testing. However, those tests caught some bugs which I probably
*wouldn't* have caught just by stepping through the code a few times, even
if I'd stepped through every line.

> Unit tests, or smoke tests as they are sometimes called, are great, but
> they find most of their importance in regression testing. That's not what
> you said. You said you don't verify the code you write; you rely on
> someone else to write a unit test to test it for you.

Where did Niall say that someone *else* would be writing the unit tests?
Typically it's the person writing the code who writes the unit tests in the
first place.

> I won't be flying on your airplane!

To be honest, and only in terms of this conversation, I'd rather fly on
Niall's than yours...

--
Jon Skeet - <sk***@pobox.com>
http://www.pobox.com/~skeet/
If replying to the group, please do not mail me too
Nov 15 '05 #13
What you are describing is XP programming, isn't it? So we are on the same
track. I have yet to be convinced that this approach to programming is
profitable, all things considered. Here is a link:
http://www.extremeprogramming.org/. I think the gist of your argument was
that you would rather unit test than step thru your code. My response was
no, that's wrong: you absolutely must step thru every line of code you
write, no excuses. Sure, go ahead and unit test all you want to; that's
part of it as well -- a large part, too -- but you seemed to not want to
step thru written code.

In addition, I think you went way off course by thinking that if you
changed line 10,428 in the source code, you would need to step thru all the
previous lines of code. The assumption was that you already stepped thru
those lines of code when you first wrote them; you now need to step thru
line 10,428, which is the line you just changed. Use unit tests to
regression test the previous lines of code.

"Niall" <as**@me.com> wrote in message
news:e0*************@TK2MSFTNGP12.phx.gbl...
I have just re-read your reply. I'm not sure that you and I both have the
same idea of the meaning of "unit tests". What I am talking about are
functions that are written with the code that it is testing, which use your object, call the function with the combinations of parameters that ensure
that all possible requirements are tested. You use something like an
assertion to check that what the function returns or does to the object's
state is what you are expecting. So you end up with 1 or more (usually
around 10 or so) assertions of expected behaviour for each test. Then you
can run that test whenever you want and see the result.

I'm not talking about user acceptance testing which goes like "When I click this button, I should get a report", and someone will take the list of
expected behaviours and run through and go Pass/Fail on each test. I see
these as the contract between the customer and the developer. The unit tests are the contract between the developer(s) and their code, at a much lower
level.

So you want a square root function. You write a test (before you write the
square root function) saying that when you call the function with 4, you
expect 2. When you call it with 9, you expect 3, when you call it with 2,
you expect 1.41etc. When you give it a negative number, you expect a
particular type of exception. Later on, if someone comes along to refactor
the code to make it faster, they make their change and run the test. If it
passes, all is good. If it doesn't, you see that the result you got from
sqrt(9) was 5, and something is looking a bit fishy :P

Basically unit tests, as I use the term, are used to design by contract,
such that if the code is not behaving according to the contract, then that
fact becomes known instantly. It's similar to the preconditions,
postconditions and invariants which Eiffel touts.

Niall

"Alvin Bruney" <vapordan_spam_me_not@hotmail_no_spamhotmail.com > wrote in
message news:em**************@TK2MSFTNGP09.phx.gbl...
it's good that you can clarify your position. but i bring you back to your
original post which says you don't bother to step thru code that you write.
Why make the effort to step
through
> > your function manually each time you change it,


instead you rely on unit tests. unless you know the code, the other guy in the testing dept. cannot write a unit test to test every line of code you write. unless you yourself are writing the unit tests, you cannot

guarantee
that the unit tests will test your freshly written code.

unit tests are only as good as the effort you put into writing it. Unit
tests typically flag failures only. that is, did the button click do what it
was supposed to? Yes/No. It typically DOES NOT check to see if the
routine returned junk. I can count the number of times scripts were run against
newly written code which passed with flying colors but spat out nothing

but
junk first thing in the morning. If you aint stepping thru your code, you are taking chances.

unit tests or smoke screen tests are they are sometimes called are great

but
they find most of their importance in regression testing. that's not what you said. you said you don't verify the code you write. you rely on

someone
else to write a unit test to test it for you. i won't be flying on your
airplane!

"Niall" <as**@me.com> wrote in message
news:uO**************@TK2MSFTNGP09.phx.gbl...
Stepping through your code and checking that the right results are

produced
is exactly what a unit test does. You might be confident that your

function
is solid after stepping through it once or twice. So what happens then if the workings of a function you are calling changes? The person who writes that function might step through it once to make sure it works as they now intend. How do they magically know that your code, written previously,
wasn't expecting their function to work that way?

So now your code is not working as intended, and no one will know until you
stumble over it while using the application, or you decide to do
another step through. If you never change the function again in the development of the system, can you guarantee that you will come back around to check it?
If
you are releasing frequent updates of your software, can you guarantee that
you will step through every line of code in the entire system before each release to deal with possible changes in the code?

Can you guarantee that if someone changes a utility function, they
will look
at every place in the code that it is used and make sure that all
those routines work with the new results of their function? In a system as

large as ours (~40 developers), that's a massive amount of work, and the person who's running around checking everyone else's code may not even know what the other person's function should now do in the face of different
behaviour. So if they do decide to check someone else's code, they may not even be able to check it properly in the first place.

So... you write a unit test for your function, which says exactly what

it's
supposed to do in each case of the use of the function. You run the unit tests a hundred times a day, every single day of development. As soon as something goes wrong, everyone knows about it. Moreover, writing the unit test (before you write the function) makes you actually work out what you want the function to do before you code it up.

In the end, the difference between unit testing and your approach is:

- Unit testing forces you to work out what you want before you write the function, so you have an exact specification of input to output before any coding of the function begins.

- Unit testing provides a visible contract for the function, so that any passer by can work out how your function should be behaving. A suite of unit
tests provides the contract for a section of the application or a
class,
and
the entire collection provides the contracts for the entire
application.
- A decent unit test will run in the order of a tenth of a second.

Manually
stepping through a function will take much longer than that.

- Unit tests can (and most definitely should be) run frequently. The

running
of them can be automated, such that the system is constantly being

tested. This allows instantaneous integration of changes. You can even check if you'll break somebody else's code before you check yours in, ensuring that the central version of the system remains bug free. How many times would you
step through one function between the time it is first written and the

time
it is released to clients?

- Unit tests allow the idea of relinquishing code ownership, leading
to easier refactoring... by anyone. If I can change your code without your tests breaking, then all should be fine. If it's not, the original

author did not write sufficient tests, and they did not write your tests first to
define the behaviour of the class/function.

If I get into a situation where there can be no unit test for the code I'm writing, then I will step through it as you do and watch it very closely. But code that cannot be tested is very often a sign of bad design.

In the end, is it not better to have a documentation of what your code
should be doing, alongside the code, that can be used as a test and
run quickly whenever anyone needs it, basically a shield against outside changes
that would affect the correctness of your code?

As far as I can see, unit testing saves time developing, saves time

testing,
and saves time due to prevention of errors. So I say again that

programming
is the art of facilitating laziness :P

"Alvin Bruney" <vapordan_spam_me_not@hotmail_no_spamhotmail.com > wrote

in message news:uo**************@TK2MSFTNGP09.phx.gbl...
> Niall,
> you are joking right? say it aint so. What company do you work for? what > software do they produce? i hope it aint airplane software.
>
> "Niall" <as**@me.com> wrote in message
> news:uW**************@TK2MSFTNGP09.phx.gbl...
> >
> > "Bob Grommes" <bo*@bobgrommes.com> wrote in message
> > news:%2****************@TK2MSFTNGP12.phx.gbl...
> > > Sure, "Edit and Continue" can be implemented in C# as well as in VB. VB
> > > users demand it as they were accustomed to it in VB6, but according
to
> > MSFT,
> > > C# users are more ambivalent about it. A minority want it, a

minority
> > think
> > > it's a Bad Thing, and most don't really care. Personally I fall

into
> the
> > > "don't care" camp with suspicions that it could be a Bad Thing,
although
> > I'm
> > > trying to remain open-minded about it. I guess my thinking is,

I've > > gotten
> > > along fine without it for 20 years, and besides, I think it's

healthy
> and
> > > useful to work code out in your head and via "desk-checking" before you
> > ever
> > > execute it. Good debugging practice says you step through every

line
of
> > > code anyway at some point before release.
> >
> > I never bother to step through my functions unless I happen to find a
> > bug. I think this is what unit tests are for. Why make the effort to step
> > through your function manually each time you change it, which might be a
> > few times between start of development and end? Instead of that, you can
> > use a unit test to do the same thing for you, several times a day, and
> > much faster.
> >
> > Programming is the art of facilitating laziness :P
> >
> > Niall
> >
> >
>
>



Nov 15 '05 #14

"Jon Skeet" <sk***@pobox.com> wrote in message
news:MP************************@news.microsoft.com ...
Alvin Bruney <vapordan_spam_me_not@hotmail_no_spamhotmail.com > wrote:
it's good that you can clarify your position. but i bring you back to your original post which says you don't bother to step thru code that you write.
> > Why make the effort to step through your function manually each time
> > you change it,
instead you rely on unit tests. unless you know the code, the other guy in the testing dept. cannot write a unit test to test every line of code you write.


Which "other guy" are you talking about?


I thought he meant someone in the testing dept wrote the unit tests. This is
what we did at the last company I worked at. And it was perfectly useless,
clumsy and required way too much co-ordination. It may work with 30 - 40
programmers like what Niall is saying, but not 400 as in the last company I
worked for.
unless you yourself are writing the unit tests, you cannot guarantee
that the unit tests will test your freshly written code.
Where do you get the idea that Niall *isn't* writing the unit tests
himself?
unit tests are only as good as the effort you put into writing it.


That's true of everything - but I'd rather put effort into unit testing
properly than into stepping through code properly once.


You missed the point. I didn't, or can't remember saying I wouldn't unit
test. In fact, I didn't say that! What I had a gripe with was Niall saying he
didn't step thru written code, instead relying on unit tests to fish out
bugs. That is wrong and will always be wrong. Every line of written code
must be stepped thru in the debugger. There is no compromise on this. None
at all. You can unit test before, after or during the process. I encourage
it.

Where did Niall say that someone *else* would be writing the unit
tests? Typically it's the person writing the code who writes the unit
tests in the first place.
No, that's not even the recommended way. In my case, unit tests were
prepared by the testing department. We were made to participate in a one day
seminar by some expert who taught that this was the recommended way to
proceed. Hence it was an assumption on my part based on my experience.
i won't be flying on your airplane!


To be honest, and only in terms of this conversation, I'd rather fly on
Niall's than yours...


to each his own. include me in your will.

--
Jon Skeet - <sk***@pobox.com>
http://www.pobox.com/~skeet/
If replying to the group, please do not mail me too

Nov 15 '05 #15
I'm confused now.
What is the point of doing both unit tests and stepping thru a debugger then
(on corporate time that is)? What would one method uncover that the other
one won't since they are in fact doing the exact same thing - assuming that
you spent the day writing a unit test to examine every line of code (which
is feasible but very impractical). Seems like we all ought to go with
Niall's suggestion and forget about the debugger and just rely on unit tests
to keep the airplane from hitting the mountain.

Case in point. My function takes a date string, examines it, and returns a
formatted string, or empty if the date is bogus. There are many ways to
enter bogus dates, obviously. Would your unit test generate every possible
bogus date? No? A subset of bogus dates maybe? Well, you are flying blind,
hoping that the date entered was a date your unit test actually used. Is it
not easier to flush out this bug by examining what the string looks like in
a debugger? It's now easy to tell how your function can handle it for every
possible case. Not the best case in point, but sufficient to demonstrate
what I am thinking.
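
For illustration, a minimal hedged sketch of the kind of function being
described, plus a few NUnit checks against it (all names are hypothetical,
and DateTime.Parse merely stands in for whatever validation the real
function used):

using System;
using System.Globalization;
using NUnit.Framework;

public static class DateFormatter
{
    // Returns a formatted date string, or String.Empty for bogus input.
    public static string FormatDate(string input)
    {
        try
        {
            DateTime parsed = DateTime.Parse(input, CultureInfo.InvariantCulture);
            return parsed.ToString("yyyy-MM-dd");
        }
        catch (ArgumentNullException) { return String.Empty; }
        catch (FormatException)       { return String.Empty; }
    }
}

[TestFixture]
public class DateFormatterTests
{
    [Test]
    public void ValidDateIsFormatted()
    {
        Assert.AreEqual("2003-11-15", DateFormatter.FormatDate("15 Nov 2003"));
    }

    [Test]
    public void BogusDatesReturnEmpty()
    {
        // A *subset* of bogus inputs, exactly as the post says - neither
        // a test nor a debugging session can enumerate every possible one.
        Assert.AreEqual(String.Empty, DateFormatter.FormatDate("not a date"));
        Assert.AreEqual(String.Empty, DateFormatter.FormatDate("31/02/2003"));
    }
}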

That mountain sure looks close to us now. Wonder if the date string the
pilot entered was tested.

"Jon Skeet" <sk***@pobox.com> wrote in message
news:MP************************@news.microsoft.com ...
Alvin Bruney <vapordan_spam_me_not@hotmail_no_spamhotmail.com > wrote:
That's true of everything - but I'd rather put effort into unit testing properly than into stepping through code properly once.


You missed the point. I didn't, or can't remember saying I wouldn't unit
test. In fact, I didn't say that! What I had a gripe with was Niall saying he
didn't step thru written code, instead relying on unit tests to fish out
bugs. That is wrong and will always be wrong. Every line of written code
must be stepped thru in the debugger. There is no compromise on this. None
at all. You can unit test before, after or during the process. I encourage
it.


But unit testing *does* (or at least should) execute all the code,
getting complete coverage. What does it matter whether or not that
happens in a debugger?
Where did Niall say that someone *else* would be writing the unit
tests? Typically it's the person writing the code who writes the unit
tests in the first place.


No, that's not even the recommended way.


It is from everyone I've heard recommending unit testing...
In my case, unit tests were prepared by the testing department.


And that sounds like a lousy idea to me - and you've said it didn't
work. In my experience, testing departments should be dealing much more
with system tests than with unit tests. Unit tests are on a much lower
level than I believe most testing departments should care about.
We were made to participate in a one day
seminar by some expert who taught that this was the recommended way to
proceed. Hence it was an assumption on my part based on my experience.


It sounds like your "expert" wasn't actually very expert at all. Just
because he recommended something that doesn't work doesn't mean that
things that *do* work aren't recommended by other people.

--
Jon Skeet - <sk***@pobox.com>
http://www.pobox.com/~skeet/
If replying to the group, please do not mail me too

Nov 15 '05 #16
Alvin Bruney <vapordan_spam_me_not@hotmail_no_spamhotmail.com > wrote:
I'm confused now.
What is the point of doing both unit tests and stepping thru a debugger then
(on corporate time that is)?
I generally only step through in a debugger when I don't understand
what's going on - eg if there's a bug.
What would one method uncover that the other
one won't since they are in fact doing the exact same thing - assuming that
you spent the day writing a unit test to examine every line of code (which
is feasible but very impractical).
In order to step through every line, you'd have to set up the
appropriate conditions in the first place. If you can set up those
conditions, unit test them - then you can do it repeatably.
Seems like we all ought to go with Nial's
suggestion and forget about the debugger and just rely on unit tests to keep
the airplane from hitting the mountain.
Forget about the debugger for stepping through code which you believe
to be correct. Keep it for code which isn't working how you expect it
to.
Case in point. My function takes a date string examines it, returns a
formatted string or empty if the date is bogus. There are many ways to enter
bogus dates obviously. Would your unit test, generate every possible bogus
date? No?
No, nor would you step through in a debugger for every possible bogus
date. However, you can still make sure your unit test covers every line
of code.
A subset of bogus dates maybe? Well you are flying blind hoping
that the date entered was a date your unit test actually used.
Likewise if you're stepping through the debugger.
Is it not
easier to flush out this bug by examining what the string looks like in a
debugger?
Only if you know in advance which string is actually a bug.
It's now easy to tell how your function can handle it for every
possible case. Not the best case in point but sufficient to demonstrate what
I am thinking.
I don't think it is, actually - you haven't really demonstrated a case
where stepping through in the debugger would have helped.
That mountain sure looks close to us now. wonder if the date string the
pilot entered was tested.


Of course, you haven't actually given any evidence that your code is
safer than Niall's.

--
Jon Skeet - <sk***@pobox.com>
http://www.pobox.com/~skeet/
If replying to the group, please do not mail me too
Nov 15 '05 #17

"Alvin Bruney" <vapordan_spam_me_not@hotmail_no_spamhotmail.com > wrote in
message news:%2****************@TK2MSFTNGP10.phx.gbl...
What you are describing is XP programming, isn't it? So we are on the same
track. I am yet to be convinced that this approach to programming is
profitable, all things considered. Here is a link.
http://www.extremeprogramming.org/. I think the gist of your argument was
you would rather unit test than step thru your code. My response was no,
that's wrong. you absolutely must step thru every line of code you write. no excuses. sure go ahead and unit test all you want to, that's part of it as
well - a large part too, but you seemed to not want to step thru written
code.

In addition, i think you went way off course by thinking that if you changed line 10,428 in source code that you needed to step thru all previous lines
of code. The assumption was you already stepped thru these lines of code
when you first wrote them, you now need to step thru line 10,428 which is
the line you just changed. Use unit tests to regression test the previous
lines of code.


In this case, you need unit testing as you say. If you didn't have unit
testing, you'd have no idea whether the rest of the system still works after
you change your code. The thing is... if you're prepared to use unit testing
for ensuring the safety of the whole of the system when central code is
changed, why are you not prepared to use it to ensure the safety of the code
that you're actually changing? Ideally, if there's a bug, you should write a
test which fails because of the bug, and then fix it.

If you do this, you get three benefits. Firstly, you can be certain you
fixed the bug, because if you didn't, the test would fail. Secondly, you
make sure that if someone comes along and changes the code such that the bug
returns, then the test will catch that. So you've fixed the bug for now, but
also ensured it remains fixed in future. Thirdly, you get documentation of
the bug in the form of a new requirement (which was previously missing), so
you further define the specifications of your code.
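
A minimal sketch of that workflow, reusing the hypothetical DateFormatter
from earlier in the thread: the test is written first, fails while the bug
exists, and then stands guard against regression:

using System;
using NUnit.Framework;

[TestFixture]
public class DateFormatterRegressionTests
{
    // Written *before* the fix, so it fails until the bug is gone and
    // keeps failing if anyone ever reintroduces it.
    [Test]
    public void EmptyInputReturnsEmptyInsteadOfThrowing()
    {
        // Suppose the reported bug was an unhandled exception on "".
        Assert.AreEqual(String.Empty, DateFormatter.FormatDate(""));
    }
}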

Niall
Nov 15 '05 #18

"Alvin Bruney" <vapordan_spam_me_not@hotmail_no_spamhotmail.com > wrote in
message news:ea**************@TK2MSFTNGP12.phx.gbl...
I thought he meant someone in the testing dept wrote the unit tests. This is what we did at the last company I worked at. And it was perfectly useless,
clumsy and required way too much co-ordination. It may work with 30 - 40
programmers like what Niall is saying, but not 400 as in the last company I
worked for.
....

Where did Niall say that someone *else* would be writing the unit
tests? Typically it's the person writing the code who writes the unit
tests in the first place.


No, that's not even the recommended way. In my case, unit tests were
prepared by the testing department. We were made to participate in a one-day
seminar by some expert who taught that this was the recommended way to
proceed. Hence it was an assumption on my part based on my experience.


If this person giving the seminar was talking specifically about XP, then I
think he wasn't very expert, as Jon has said. The idea of unit testing in XP
seems to be that you use the tests to give you the requirements for your
code, helping you evolve the design, and you get automated testing for free.
The XP idea is that you are supposed to write the unit tests before you
write any of the actual code. They even go so far as to recommend you write
the test before it will even compile (because you haven't even declared the
actual function).

So... writing unit tests after you have written the code, and especially
getting someone who isn't involved in the coding of the original function to
write the tests afterward is a very bad thing. That person could have no
idea, or not enough of an idea of what you're doing, and hence not write
tests that fully cover your code. This is why the tests are more useful if
they come first: by writing the test, you then know what you're doing in the
function.

The only way I could see unit testing working when there is a coder (or
coders) and a tester who is completely uninvolved in the coding would be if
the tester is the one that knows what the function should be doing, and they
write the test for the coder as a specification before the coder begins
their work.

Niall
Nov 15 '05 #19
One final question. And this would be the final one.
You
actually
seriously
no jokingly
the-boss-is-not-over-your-shoulder-so-you-can-speak-freely

write unit tests before you start to write code?
write unit tests before you start to debug broken code?

if this is true you need a medal. plain and simple.
waiting for that serious reply.

PS. If it is any consolation Skeet doesn't do unit tests before he fires up
the debugger either. He says so on his website ;-)

"Niall" <as**@me.com> wrote in message
news:eO**************@TK2MSFTNGP12.phx.gbl...

"Jon Skeet" <sk***@pobox.com> wrote in message
news:MP************************@news.microsoft.com ...
Alvin Bruney <vapordan_spam_me_not@hotmail_no_spamhotmail.com > wrote:
What would one method uncover that the other
one won't since they are in fact doing the exact same thing - assuming that you spent the day writing a unit test to examine every line of code (which is feasible but very impractical).
In order to step through every line, you'd have to set up the
appropriate conditions in the first place. If you can set up those
conditions, unit test them - then you can do it repeatably.


Exactly. If you're stepping through your code, and doing it thoroughly,
you'd have to know what you'd expect to happen, and you'd have to know what
kind of cases are likely to cause problems. Basically, every time you're
thinking to yourself something along the lines of "Right, if I set this to
<x>, and then let that line run, it should go into here, and set that to
<y>", you've just done exactly the same thing as you do when you write a
unit test. You set up the condition, cause the code execution to happen, and
then make sure that the code executed with the expected result.

So if you know all these conditions and checks used to ensure correct
behaviour, why not write them down in code so that they are persistent? That
way you can gain the benefits from having documented the requirements, as
well as repeatable testing that can be used by other people to prevent
errors.
As for the date example, I agree with what Jon has said. You can't test
every possible input in debugging and unit testing, but in both cases, you
can make sure that you cover all cases. Hence, you run the same risks of
missing cases with both approaches. You could actually cover more cases with
the unit tests if you wanted to, because once you have written the test, it
only takes a fraction of a second to run. So you could test ten dates, a
hundred dates; you could write a macro to generate assertions for testing a
thousand dates, if you really wanted. It would take forever to do that while
stepping through the code.
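
As a hedged sketch of that idea, one NUnit test can generate and check a
year's worth of dates against the hypothetical DateFormatter from earlier in
the thread in well under a second:

using System;
using NUnit.Framework;

[TestFixture]
public class DateFormatterSweepTests
{
    // Generated batch of dates - far faster than stepping through by hand.
    [Test]
    public void EveryDayOfAYearRoundTrips()
    {
        DateTime day = new DateTime(2003, 1, 1);
        for (int i = 0; i < 365; i++)
        {
            string expected = day.ToString("yyyy-MM-dd");
            Assert.AreEqual(expected, DateFormatter.FormatDate(expected));
            day = day.AddDays(1);
        }
    }
}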
Is it not easier to flush out this bug by examining what the string
looks like in a debugger?

How come it's easier to look at some text in the watch window and see if it has the right value than to write an assertion that expects the value you
would have been looking for in the watch window?

With unit testing, you take the time to work through the function and get
down the expected behaviours once only. In both unit testing and stepping
through, you have to work out what behaviours you want to see. So it takes
you a little longer to take that and write it down into a unit test. Hence, on a pure time basis, if you only ever test a function once, it will
probably take you less time to step through it. As soon as you test it the
second time, the story is different. You have to re-step through everything manually with the debugger, whereas you can just click Run with the unit
tests.

This only talks about the time taken to run the tests. I find that unit
testing saves a lot of time in other places, such as design, because you are generating your code to match a laid down set of specifications, and you can always tell what specifications you are not properly meeting. It also saves a lot of time in code maintenance because anyone can come through to fix a
bug or add functionality and all the work you initially did to specify
exactly how your code should work is still there. So you guarantee that the next person does not overlook something that you spent the time to nut out.
I would agree with Jon when he says to leave the debugger for bugs. If you
write good unit tests, they can cover your code as well or better than
stepping through it manually. And you get all the benefits that come with
the tests as well...

Niall

Nov 15 '05 #20
"Alvin Bruney" <vapordan_spam_me_not@hotmail_no_spamhotmail.com > wrote in
message news:uS**************@TK2MSFTNGP12.phx.gbl...
write unit tests before you start to write code?
write unit tests before you start to debug broken code?


yes, i do always

especially with GUI stuff, i have a test suite which does vectorization and
checks for proper painting, so i can catch bugs such as improper painting of
listview items etc (one other does face recognition too)

now i am writing a database test suite, i can estimate that a single test
with real data should finish 10 years from now, but that's ok, i admit it,
i am test suite addicted

Regards

ps. i have a bug in my app with the cursor shapes (once the mouse enters
some graph object's bounds, the cursor does not change to hand-cursor but to
wait-cursor), have you got any idea how i can write a test suite for it ?
Nov 15 '05 #21
"Alvin Bruney" <vapordan_spam_me_not@hotmail_no_spamhotmail.com > wrote in
message news:uS**************@TK2MSFTNGP12.phx.gbl...
One final question. And this would be the final one.
You
actually
seriously
no jokingly
the-boss-is-not-over-your-shoulder-so-you-can-speak-freely

write unit tests before you start to write code?
write unit tests before you start to debug broken code?
I can't say that I do it all the time. I do it mostly, but not always. In
the cases where I don't write the test first, it's usually because what I'm
doing is like a "spike" as seen in XP. If you don't know what that means,
it's just when you have an idea about something you want to do, and you're
having a quick test of the code to see what kind of shape the implementation
would take.

So if my manager comes to me and says that something should work
differently, and it's not obvious how to implement that, I'll have a think,
come up with an idea, and try it out. If it looks like that idea is going to
be the one I want to use, then I write the test for it, then go back and
clean up/finish off the implementation. This kind of approach may not be
appealing to those who have a passionate dislike of Edit and Continue :P

The one area I find it difficult, and not overly beneficial to test is
highly GUI dependent work. It can be a real pain to actually get the
information you need about your GUI object to be able to tell if it's
working as intended or not. So what we aim for is forms that have as little
code in them as possible, quite often no extra code on top of the designer
generated region. Then you can test the object that's underneath the form (a
business object/collection of them) as much as you like. If I do have to
write some code in the form that's non trivial, and cannot be moved
elsewhere, I do my best to set up a test for it. This still leaves a slight
gap in the unit testing, but the thinner you make your forms, the thinner
the gap.
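
A minimal sketch of that thin-form idea (all names hypothetical): the logic
lives in a plain object the tests can reach, and the form merely forwards to
it:

using System;
using NUnit.Framework;

public class InvoiceModel
{
    private decimal total;

    public decimal Total { get { return total; } }

    // All the logic lives here, outside any Form, so it is testable.
    public void AddLine(decimal unitPrice, int quantity)
    {
        if (quantity <= 0)
            throw new ArgumentOutOfRangeException("quantity");
        total += unitPrice * quantity;
    }
}

[TestFixture]
public class InvoiceModelTests
{
    [Test]
    public void AddLineAccumulatesTotal()
    {
        InvoiceModel model = new InvoiceModel();
        model.AddLine(10.5m, 2);
        Assert.AreEqual(21.0m, model.Total);
    }
}

// The form itself stays trivial, e.g.:
//   void addButton_Click(object sender, EventArgs e)
//   {
//       model.AddLine(decimal.Parse(priceBox.Text), (int)qtyUpDown.Value);
//   }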
if this is true you need a medal. plain and simple.
waiting for that serious reply.

PS. If it is any consolation Skeet doesn't do unit tests before he fires up the debugger either. He says so on his website ;-)


I think unit tests are at their most beneficial when written first. However,
sometimes that's not how the mind (well, mine at least) works. Especially if
I'm making modifications to someone else's code, sometimes I want to have a
short mess around first to get a feel for how I want to do the work. But
before I get anything substantial done, I try to get a unit test down.

So my approach isn't perfect, but I doubt anyone's is :P

Niall
Nov 15 '05 #22
Alvin Bruney <vapordan_spam_me_not@hotmail_no_spamhotmail.com > wrote:
One final question. And this would be the final one.
You
actually
seriously
no jokingly
the-boss-is-not-over-your-shoulder-so-you-can-speak-freely

write unit tests before you start to write code?
write unit tests before you start to debug broken code?

if this is true you need a medal. plain and simple.
waiting for that serious reply.

PS. If it is any consolation Skeet doesn't do unit tests before he fires up
the debugger either. He says so on his website ;-)


I rarely fire up the debugger at all. Usually when code is broken, I
fix it by adding a couple of trace elements or just by inspection.
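
For illustration, such a trace element might be nothing more than a
temporary Trace.WriteLine (a sketch with hypothetical names, not anyone's
actual code):

using System.Diagnostics;

public class OrderProcessor
{
    public void Process(int orderId)
    {
        // Temporary trace output: shows up in any attached trace
        // listener without stepping through anything.
        Trace.WriteLine(string.Format("Process: orderId={0}", orderId));
        // ... the actual work ...
    }
}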

I freely acknowledge that I don't use unit tests nearly as much as I
should. I wish I did. But then, do you really, seriously, no boss
involved, step through absolutely *every* line of code? Every single
error path, no matter how unlikely?

--
Jon Skeet - <sk***@pobox.com>
http://www.pobox.com/~skeet/
If replying to the group, please do not mail me too
Nov 15 '05 #23
"Niall" <as**@me.com> wrote in message
news:%2****************@TK2MSFTNGP09.phx.gbl...
So my approach isn't perfect, but I doubt anyone's is :P


Agreed,

read this
http://msdn.microsoft.com/library/en...html/DBGrm.asp

it might help in some cases
Nov 15 '05 #24
"Niall" <as**@me.com> wrote in message news:<Or*************@tk2msftngp13.phx.gbl>...
"Alvin Bruney" <vapordan_spam_me_not@hotmail_no_spamhotmail.com > wrote in
message news:%2****************@TK2MSFTNGP10.phx.gbl...
I think the gist of your argument was you would rather unit test than
step thru your code. My response was no, that's wrong. you absolutely
must step thru every line of code you write. <snip>


In this case, you need unit testing as you say. If you didn't have unit
testing, you'd have no idea whether the rest of the system still works after
you change your code. The thing is... if you're prepared to use unit testing
for ensuring the safety of the whole of the system when central code is
changed, why are you not prepared to use it to ensure the safety of the code
that you're actually changing. Ideally, if there's a bug, you should write a
test which fails because of the bug, and then fix it.


Just add to the unit test, or add to the unit test and walk through
the new code as the relevant part of the test is run as well?

I recall once when I just added to the unit test. It showed what I'd
done gave the correct result. Another developer insisted on walking
through the code as the test was run, which I thought was pointless,
since it produced the correct result. But walking through the code
revealed that there was a graph walk that was taking exponential time
because of bugs in my conditions for which nodes to recurse on.

Some programs have a lot going on that doesn't show up in the results,
but is visible when you step through the code.
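
As an illustrative sketch (hypothetical names, not the code from the
anecdote above), exposing a visit counter turns that kind of invisible cost
into something a unit test can assert on:

using System.Collections;
using NUnit.Framework;

public class GraphWalker
{
    public int Visits;                 // incremented on every node visit

    public void Walk(Hashtable graph, object node, Hashtable seen)
    {
        Visits++;
        if (seen.ContainsKey(node))
            return;                    // don't recurse on nodes already seen
        seen.Add(node, null);
        ArrayList neighbours = (ArrayList)graph[node];
        if (neighbours == null)
            return;
        foreach (object next in neighbours)
            Walk(graph, next, seen);
    }
}

[TestFixture]
public class GraphWalkerTests
{
    [Test]
    public void WalkDoesBoundedWork()
    {
        Hashtable graph = new Hashtable();
        ArrayList children = new ArrayList();
        children.Add("a"); children.Add("b"); children.Add("c");
        graph["root"] = children;      // root -> a, b, c

        GraphWalker walker = new GraphWalker();
        walker.Walk(graph, "root", new Hashtable());

        // Correct result *and* bounded work: 1 root + 3 children.
        // An exponential re-walk would blow this number out immediately.
        Assert.AreEqual(4, walker.Visits);
    }
}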
Nov 15 '05 #25
good point
"Bob Jenkins" <bo*********@burtleburtle.net> wrote in message
news:a5**************************@posting.google.c om...
"Niall" <as**@me.com> wrote in message

news:<Or*************@tk2msftngp13.phx.gbl>...
"Alvin Bruney" <vapordan_spam_me_not@hotmail_no_spamhotmail.com > wrote in message news:%2****************@TK2MSFTNGP10.phx.gbl...
I think the gist of your argument was you would rather unit test than
step thru your code. My response was no, that's wrong. you absolutely
must step thru every line of code you write. <snip>


In this case, you need unit testing as you say. If you didn't have unit
testing, you'd have no idea whether the rest of the system still works after you change your code. The thing is... if you're prepared to use unit testing for ensuring the safety of the whole of the system when central code is
changed, why are you not prepared to use it to ensure the safety of the code that you're actually changing. Ideally, if there's a bug, you should write a test which fails because of the bug, and then fix it.


Just add to the unit test, or add to the unit test and walk through
the new code as the relevant part of the test is run as well?

I recall once when I just added to the unit test. It showed what I'd
done gave the correct result. Another developer insisted on walking
through the code as the test was run, which I thought was pointless,
since it produced the correct result. But walking through the code
revealed that there was a graph walk that was taking exponential time
because of bugs in my conditions for which nodes to recurse on.

Some programs have a lot going on that doesn't show up in the results,
but is visible when you step through the code.

Nov 15 '05 #26
Add to the unit test and run it to check that the result shows that the bug
has been located. In the case that I'm fixing a bug, I take the situation
that causes the bug and put it in a test, then run the test to make sure
that it fails, and hence is correctly trapping the bug. Then I step through
the function and work out why it's not working as intended. At this time,
you may become aware of more conditions in which this bug shows itself, and
hence you add to your unit test. Then you make your fix, and run the unit
test. If you're especially paranoid, you can step through the code again at
this point. But if you have written your unit test properly, there is no
gain to be made by stepping through the code. In fact, if you can pick up
bugs in stepping through your code that your test cannot pick up, then you
have not written a complete test.

If it's performance you're concerned about, do you walk all your code just
to see how it performs? You could walk through each line in a function, make
sure it's not looping one too many times or some such thing. But you will
have no idea whether your function is being called too many times by other
parts of the system unless you do a full code walkthrough.

When I code, I keep a general mind towards the performance of my code. But I
don't spend too much time trying to make sure it runs as fast as possible,
because it can lead to less clear code, and I can be wasting time making
code that runs in a negligible time run in even less negligible time. If I
notice something that is too slow, I will profile it. Every now and then, I
do general profiles on the system (both performance and memory usage) to see
what kind of a state we're in. If anything stands out, I have a look at it,
profile more specifically, and go from there.

I know this goes against the older style of coding of not "Can you get blood
from a stone?" but "How much more blood can you get from this stone if you
squeeze a little harder?", and no doubt a lot of people disagree with this
mindset, but... If the code you have written fully and correctly meets
business purposes (and you can prove this through passing unit tests), and
there is no noticeable lack of performance, why do you need to step through
your code? Stepping through code for performance checking can be insightful,
but if you don't know that it actually causes a performance problem for the
end user, it is often random, aimless and virtually fruitless.

Niall

"Bob Jenkins" <bo*********@burtleburtle.net> wrote in message
news:a5**************************@posting.google.c om...
"Niall" <as**@me.com> wrote in message

news:<Or*************@tk2msftngp13.phx.gbl>...
"Alvin Bruney" <vapordan_spam_me_not@hotmail_no_spamhotmail.com > wrote in message news:%2****************@TK2MSFTNGP10.phx.gbl...
I think the gist of your argument was you would rather unit test than
step thru your code. My response was no, that's wrong. you absolutely
must step thru every line of code you write. <snip>


In this case, you need unit testing as you say. If you didn't have unit
testing, you'd have no idea whether the rest of the system still works after you change your code. The thing is... if you're prepared to use unit testing for ensuring the safety of the whole of the system when central code is
changed, why are you not prepared to use it to ensure the safety of the code that you're actually changing. Ideally, if there's a bug, you should write a test which fails because of the bug, and then fix it.


Just add to the unit test, or add to the unit test and walk through
the new code as the relevant part of the test is run as well?

I recall once when I just added to the unit test. It showed what I'd
done gave the correct result. Another developer insisted on walking
through the code as the test was run, which I thought was pointless,
since it produced the correct result. But walking through the code
revealed that there was a graph walk that was taking exponential time
because of bugs in my conditions for which nodes to recurse on.

Some programs have a lot going on that doesn't show up in the results,
but is visible when you step through the code.

Nov 15 '05 #27
"Niall" <as**@me.com> wrote in message news:<u6**************@TK2MSFTNGP10.phx.gbl>...
I know this goes against the older style of coding of not "Can you get blood
from a stone?" but "How much more blood can you get from this stone if you
squeeze a little harder?", and no doubt a lot of people disagree with this
mindset, but... If the code you have written fully and correctly meets
business purposes (and you can prove this through passing unit tests), and
there is no noticeable lack of performance, why do you need to step through
your code? Stepping through code for performance checking can be insightful,
but if you don't know that it actually causes a performance problem for the
end user, it is often random, aimless and virtually fruitless.


Good point -- any testing method has to be monitored to see how useful
it is in the current situation.

In the situation I deal with (debugging other people's code in a large
old product), walking through the code finds bugs quite easily, easier
than I can find them by probing with testcases or just reading the
code. For me, walking through code is sort of a directed code review.
It catches things not so much because the code is going down the
wrong path, but because it focuses my attention on the active pieces
of the code and lets me verify variables are actually set. Expensive
things in inner loops, uninitialized variables, conditions that don't
make sense but happen to come out right for the current case,
realizing the current routine shouldn't be hit at all, noticing flags
aren't set the way I thought they were -- all that I catch by walking
through the code.

80% of the time I walk through my code after it's fixed my testcase,
and about 95% of the time I find something I did wrong, or was wrong
before my changes. I get nervous when I find everything went right
the first try.
Nov 15 '05 #28
I know what you mean. I'm not saying that the debugger is useless or a sin,
I just don't agree that it should be entrenched as "required practice" in
development, which is not what you're saying anyway (I think). Stepping
through the code can be helpful to see if you've designed your code badly,
but I don't think it's a requirement. If unit tests show that the program
does what it's supposed to, and its performance is acceptable, then you can
release it to the client without ever needing to step through the code.

But I agree that you can use the debugger to catch situations where code
could be improved. I don't have an attitude that "as long as it's working
and fast, then the code can be as bad/messy/whatever as you want", but at
the end of the day, such code/design tuning doesn't really bother the
client.

Incidentally, one thing I like to do every now and then to check up on
performance, is to run a profiler on the SQL Server to see all the queries
come in. You can quickly get a good idea for how much SQL you'll be
generating just by opening up some forms/doing some things. If you see the
same select query repeated in quick succession, you know you could probably
improve that. If I see that such a repeated query has a high duration, then
I go have a look.

Niall

"Bob Jenkins" <bo*********@burtleburtle.net> wrote in message
news:a5**************************@posting.google.c om...
"Niall" <as**@me.com> wrote in message news:<u6**************@TK2MSFTNGP10.phx.gbl>...
I know this goes against the older style of coding of not "Can you get blood from a stone?" but "How much more blood can you get from this stone if you squeeze a little harder?", and no doubt a lot of people disagree with this mindset, but... If the code you have written fully and correctly meets
business purposes (and you can prove this through passing unit tests), and there is no noticeable lack of performance, why do you need to step through your code? Stepping through code for performance checking can be insightful, but if you don't know that it actually causes a performance problem for the end user, it is often random, aimless and virtually fruitless.


Good point -- any testing method has to be monitored to see how useful
it is in the current situation.

In the situation I deal with (debugging other people's code in a large
old product), walking through the code finds bugs quite easily, easier
than I can find them by probing with testcases or just reading the
code. For me, walking through code is sort of a directed code review.
It catches things not so much because the code is going down the
wrong path, but because it focuses my attention on the active pieces
of the code and lets me verify variables are actually set. Expensive
things in inner loops, uninitialized variables, conditions that don't
make sense but happen to come out right for the current case,
realizing the current routine shouldn't be hit at all, noticing flags
aren't set the way I thought they were -- all that I catch by walking
through the code.

80% of the time I walk through my code after it's fixed my testcase,
and about 95% of the time I find something I did wrong, or was wrong
before my changes. I get nervous when I find everything went right
the first try.

Nov 15 '05 #29
Hi Niall,
>> If unit tests show that the program does what it's supposed to, and its
performance is acceptable, then you can release it to the client without
ever needing to step through the code. <<

There are so many things wrong with this attitude that I don't know where to
start debugging your thought processes!

Unit tests can only test your current thinking about what your code should
be doing. Most developers can virtually guarantee that their first ideas
about what their code should be doing are faulty. One of the major tools for
correcting your ideas is to walk through your code in a debugger to watch
the code flow and data flow. Relying on unit tests is just intellectual
laziness, a prop for faulty thinking.

In addition, code coverage tools used to verify your unit tests are not very
useful. They can't identify what the code should be doing, or what it isn't
doing. But developers tend to use these tools to say "But there can't be a
bug: my unit tests are infallible and my code coverage tool verified that
all paths have been executed".

In my company, developers are required to walk through all of their code. If
I find a developer has missed a bug that would have been caught by walking
through the code, that developer is in trouble. If the same developer
continues to miss bugs through not walking through code, she/he is out of
the door. So far, this has only happened once - worked wonderfully for
"encourager les autres" :)

Regards,

Mark
--
Author of "Comprehensive VB .NET Debugging"
http://www.apress.com/book/bookDisplay.html?bID=128
"Niall" <as**@me.com> wrote in message
news:Ot**************@tk2msftngp13.phx.gbl...
I know what you mean. I'm not saying that the debugger is useless or a sin,
I just don't agree that it should be entrenched as "required practice" in
development, which is not what you're saying anyway (I think). Stepping
through the code can be helpful to see if you've designed your code badly,
but I don't think it's a requirement. If unit tests show that the program
does what it's supposed to, and its performance is acceptable, then you can
release it to the client without ever needing to step through the code.

But I agree that you can use the debugger to catch situations where code
could be improved. I don't have an attitude that "as long as it's working
and fast, then the code can be as bad/messy/whatever as you want", but at
the end of the day, such code/design tuning doesn't really bother the
client.

Incidentally, one thing I like to do every now and then to check up on
performance, is to run a profiler on the SQL Server to see all the queries
come in. You can quickly get a good idea for how much SQL you'll be
generating just by opening up some forms/doing some things. If you see the
same select query repeated in quick succession, you know you could probably
improve that. If I see that such a repeated query has a high duration, then
I go have a look.

Niall

"Bob Jenkins" <bo*********@burtleburtle.net> wrote in message
news:a5**************************@posting.google.c om...
"Niall" <as**@me.com> wrote in message news:<u6**************@TK2MSFTNGP10.phx.gbl>...
I know this goes against the older style of coding of not "Can you get

blood from a stone?" but "How much more blood can you get from this stone if you squeeze a little harder?", and no doubt a lot of people disagree with this mindset, but... If the code you have written fully and correctly meets
business purposes (and you can prove this through passing unit tests), and there is no noticeable lack of performance, why do you need to step through your code? Stepping through code for performance checking can be insightful, but if you don't know that it actually causes a performance problem for the end user, it is often random, aimless and virtually fruitless.


Good point -- any testing method has to be monitored to see how useful
it is in the current situation.

In the situation I deal with (debugging other people's code in a large
old product), walking through the code finds bugs quite easily, easier
than I can find them by probing with testcases or just reading the
code. For me, walking through code is sort of a directed code review.
It catches things not so much because the code is going down the
wrong path, but because it focuses my attention on the active pieces
of the code and lets me verify variables are actually set. Expensive
things in inner loops, uninitialized variables, conditions that don't
make sense but happen to come out right for the current case,
realizing the current routine shouldn't be hit at all, noticing flags
aren't set the way I thought they were -- all that I catch by walking
through the code.

80% of the time I walk through my code after it's fixed my testcase,
and about 95% of the time I find something I did wrong, or was wrong
before my changes. I get nervous when I find everything went right
the first try.


Nov 15 '05 #30
Hi Jon,

I quite agree that there is no silver bullet. Unit tests on their own,
whether manual or automated, won't find all of your bugs. Nor will stepping
through your code, or performing code inspections. Each of these techniques
on their own will find a specific set of bugs, but only by combining all of
them together can you have some confidence in your code.

My criticism is that Niall's attitude puts all the emphasis on unit tests
backed by code coverage, and this simply doesn't work in isolation.

Here are a couple of papers that I found useful in clarifying my thinking
when doing research for my book. The first looks at subtleties in using code
coverage tools, the second looks at problems with unit test automation.

http://www.testing.com/writings/coverage.pdf
http://www.satisfice.com/articles/te..._snake_oil.pdf

Mark
--
Author of "Comprehensive VB .NET Debugging"
http://www.apress.com/book/bookDisplay.html?bID=128
"Jon Skeet" <sk***@pobox.com> wrote in message
news:MP************************@news.microsoft.com ...
Mark Pearce <ev**@bay.com> wrote:
In addition, code coverage tools used to verify your unit tests are not very useful. They can't identify what the code should be doing, or what it isn't doing. But developers tend to use these tools to say "But there can't be a
bug: my unit tests are infallible and my code coverage tool verified that
all paths have been executed".


This is a straw man. Clearly anyone who, when presented with a problem
says that it can't be in their code because their unit tests are
infallible is crazy. I could present the other straw man, the developer
who says "But there can't be a bug: I've walked through all the code
and it did what it should."

When presented with a problem, you verify that it really *is* a
problem, write a unit test which shows that it's a problem (and check
that that unit test fails on the current code), then fix the code and
check that the unit test now passes.

--
Jon Skeet - <sk***@pobox.com>
http://www.pobox.com/~skeet/
If replying to the group, please do not mail me too
Nov 15 '05 #31
You know Harry, this is really true. I have a C++ background and if I can't
step thru that sucker to see what's going on, I get spooked because so many
things can go wrong in C++ and not show its butt in a unit test. I think
that's probably less of a problem in .NET but it's a habit for me that i
can't shake off. Just today this guy thought my code had a bug, while he was
explaining i was stepping with my debugger (not really listening to him
while he was yapping ha ha) i was able to determine a little before he was
finished yapping that he was doing something wrong. what can i say, old
habits are as bad as good habits. i'll admit i have terrible habits.
"Harry Bosch" <no**@given.com> wrote in message
news:Xn***********************@207.46.248.16...
"Niall" <as**@me.com> wrote:
I know what you mean. I'm not saying that the debugger is useless or a
sin, I just don't agree that it should be entrenched as "required
practice" in development, which is not what you're saying anyway (I
think). Stepping through the code can be helpful to see if you've
designed your code badly, but I don't think it's a requirement. If
unit tests show that the program does what it's supposed to, and its
performance is acceptable, then you can release it to the client
without ever needing to step through the code.
I often step through newly written code, to see if it's doing what I expect
(or hope :-). This was more of a standard practice for me when I was doing
C/C++, because often the API docs are unclear or insufficient, and you're
not entirely sure what you're getting back unless you step through and
actually look at it (the DWORD, the buffer, etc.). I do this far less now
in .NET, almost more of an old habit than a need, but it does point out
that in the unmanaged C++ world your certainty on the correctness of your
code is low.

--
harry

Nov 15 '05 #32
Alvin Bruney <vapordan_spam_me_not@hotmail_no_spamhotmail.com > wrote:
You know Harry, this is really true. I have a C++ background and if I can't
step thru that sucker to see what's going on, I get spooked because so many
things can go wrong in C++ and not show it's butt in a unit test. I think
that's probably less of a problem in .NET but it's a habit for me that i
can't shake off.


That makes a lot more sense. One of the things I was thinking about
when reading this thread is that if your code is sufficiently complex
that just looking at it, it isn't crystal clear what's going on (so
that peer review can consist just of looking at the code) then it
should almost certainly be simplified. The mark of a great engineer
isn't that he produces complex code which a less able engineer can't
understand, but that he can see his way clear to solving a complex
problem by writing code that everyone can understand :)

Of course, the various bugs in the VS.NET 2002 debugger make me
somewhat more wary of stepping through code as well... I tend to regard
stepping through the code in a debugger as a last resort as it only
shows you what's happening *this* time. I'd rather take time to look at
the whole code of a sequence and understand it in a more global sense,
and then work out why it's not behaving as it should.

--
Jon Skeet - <sk***@pobox.com>
http://www.pobox.com/~skeet/
If replying to the group, please do not mail me too
Nov 15 '05 #33
Jon Skeet <sk***@pobox.com> wrote:
That makes a lot more sense. One of the things I was thinking about
when reading this thread is that if your code is sufficiently complex
that just looking at it, it isn't crystal clear what's going on (so
that peer review can consist just of looking at the code) then it
should almost certainly be simplified. The mark of a great engineer
isn't that he produces complex code which a less able engineer can't
understand, but that he can see his way clear to solving a complex
problem by writing code that everyone can understand :)


You hit the nail on the head. The point about being able to just look at
the code and see that it is (or is not) correct is the goal of code
clarity and simplification. And if you can write code that others can
understand clearly, you're that much better at it.

One thing I always hated when reviewing C++ code is those horrible string
loops some people loved to write. You know the kind, where it's a mass of
pointer variables with nested and's and or's, pre-/post-
increment/decrement operators, and inline assignments, all shoved into as
little space as possible. And most of the time, there was something in
the standard library that would do what the messy code was attempting, or
at least something that could simplify it. You can't look at code like
that and know if it is correct or not, you have to trace through it and
track the pointer values as you go, and check for ALL of the null
pointer, past-the-end, and off-by-one errors. And the programmer would
justify it as being "efficient" :-) Things like that belong in a library,
written and debugged once (and optimized, if deemed necessary). And the
standard classes and libraries have most of this already.

As a funny aside, when I interviewed at Microsoft I was asked to write
some super-efficient string handling code at the whiteboard. I had to
laugh (to myself, of course :-) -- of all the things I could have been
asked about! OTOH, I also had some fascinating discussions with some of
the PM's, who were quite sharp and knowledgeable, so it was not all
inappropriate.

--
harry
Nov 15 '05 #34
There's not too much in here that I haven't already said before in this
thread, so I'll try to be brief (this time, for once :P)

"Mark Pearce" <ev**@bay.com> wrote in message
news:OK**************@TK2MSFTNGP09.phx.gbl...
Hi Niall,
>> If unit tests show that the program does what it's supposed to, and its
performance is acceptable, then you can release it to the client without
ever needing to step through the code. <<

There are so many things wrong with this attitude that I don't know where
to start debugging your thought processes!
I'm surprised it took so long for someone to come out of the woodwork and
say something like this :P

Unit tests can only test your current thinking about what your code should
be doing. Most developers can virtually guarantee that their first ideas
about what their code should be doing are faulty. One of the major tools for
correcting your ideas is to walk through your code in a debugger to watch
the code flow and data flow. Relying on unit tests is just intellectual
laziness, a prop for faulty thinking.
Unit tests are evolved as your requirements are evolved. Just as unit tests
represent your current thinking, so does your step through debugging. If you
don't know that your function should behave in X manner yet, you won't write
a unit test to ensure it does, and you won't realise it's not doing that
when you step through your code. In most cases, anything you are
subconsciously checking as you step through your code can and should become
an explicit test. If you do this, you make sure everyone has the same ideas
about what the code should be doing.

The only time I think this can be a problem is if you're dealing with API
stuff, it might be pretty difficult to test what's going on outside of your
control.

In addition, code coverage tools used to verify your unit tests are not very useful. They can't identify what the code should be doing, or what it isn't doing. But developers tend to use these tools to say "But there can't be a
bug: my unit tests are infallible and my code coverage tool verified that
all paths have been executed".


But the tools can be examined. If someone has some buggy code, and has a
unit test which should have picked that up, you can look at what happened
and work things out, improve the test. If someone has missed some buggy
functionality in a step through debugging, then by the time the bug is
discovered, no one has any idea why the bug wasn't originally caught. At
least with unit tests, you can improve their usage through examining
mistakes.

As you say, there's no silver bullet. But I still contend that it's a rare
case where you can pick something up with a debugger that you wouldn't have
picked up if you correctly used unit tests. And (as I've said heaps of times
before), unit tests have a lot of other advantages apart from just ensuring
that code is correct.

You say that relying on unit tests is laziness. At my company, not writing
the tests is laziness. It's easy to write some code, have a step through it,
bring up the form that uses it and toy around with it for a bit. But if you
don't stop to document what behaviour you want and how to push the code and
make sure it works properly, then only the original programmer benefits from
that testing, and they only benefit the once - when they do the test. After
that, there's no persisting benefit of the debug step through. I don't have
some kind of a rule saying I won't use the debugger, I just don't use it to
replace a unit test.

Man, if I could have written this much so easily at high school, I would
have scored much better in those English essays...

Niall
Nov 15 '05 #35
Hi Niall,

I'm actually not going to get in between you and Mark here ;), I just had a
couple of quick questions on getting started with unit testing.

Can you point me to any resources that will give me quick overview of unit
testing? Are there any tools that you recommend? I've briefly looked at
NUnit -- is that the way to go?

Thanks,
Jeff

"Niall" <as**@me.com> wrote in message
news:uS**************@TK2MSFTNGP10.phx.gbl...
>> Just as unit tests represent your current thinking, so does your step
through debugging. <<

On the contrary, stepping through code shows you what's *actually*
happening, not what you think is happening, or what you think should be
happening. This is a crucial distinction, and what makes stepping through
code such a powerful technique.


What I meant was this: you can only validate the workings of a function
based on what you currently think it should be doing. If a function is
giving incorrect output, but you don't know that it's incorrect, then you
won't notice when you are stepping through the code. If a square root
function is giving you 3 when you pass it 16, then you have to know that
you're expecting 4 before you will realise it's behaving incorrectly. So
what I'm saying is that there's very little difference between stepping
through your code with the number 4 in your head and writing a unit test
which calls the function and compares the result to 4. The advantage of the
latter is that with the unit test, you've documented the expected behaviour
and ensured that if, at any time, the function stops meeting that
requirement, it will be known instantly, without waiting for the next person
to step through the code.
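
As a trivial sketch, the "number 4 in your head" written down as a
persistent check (using the framework's own Math.Sqrt as a stand-in for the
function under test):

using System;
using NUnit.Framework;

[TestFixture]
public class SquareRootTests
{
    // The expectation that would otherwise live only in the head of
    // whoever is stepping through, pinned down as an assertion.
    [Test]
    public void SquareRootOfSixteenIsFour()
    {
        Assert.AreEqual(4.0, Math.Sqrt(16.0));
    }
}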

>> The only time I think this can be a problem is if you're dealing with API
stuff, it might be pretty difficult to test what's going on outside of your
control. <<

Well, increasingly nowadays with component software, and especially with
.NET, the majority of all commercial programming involves the extensive use
of APIs. The .NET Framework itself is just one huge set of APIs. And every
time you use a third-party control library, you're making dozens of
assumptions about the behaviour of that control. The same goes with any
other third-party library.


By APIs, I was referring more to calls that go outside your environment.
Sure, .Net is a bunch of APIs, but .Net is your environment, and it's very
easy to look at the values of .Net types in a unit test and ensure they are
being set correctly by your code.

>> But I still contend that it's a rare case where you can pick something up
with a debugger that you wouldn't have picked up if you correctly used unit
tests. <<

But that's a self-fulfilling prophecy, because you've already stated that
you don't often step through code. Try, for just one week, to step through
every piece of code as you write it. Be intellectually honest during that
week, and try to identify where this technique is helping you to find
design, construction, and testing bugs.


From this, I think you are using stepping through the code to pick up more
than false results. Unit testing is a good tool to help you design your code
to meet the requirements/expected behaviour, but it's not a tool that's
meant to be used to design application architecture, etc, and I haven't
claimed that it is. The point of unit testing is to pin the program to the
requirements, because the requirements are the most important thing;
otherwise, why are you writing the program? The advantage of this approach
is that you can change your design however you want: as long as the tests
are passing and you've written them properly, then your program should still
be meeting the requirements. So when you talk about design and construction
bugs, unit testing isn't aimed at these situations.

People seem to be confused about unit testing, thinking it either means
automated scripts that are executed by some GUI testing framework
application, testing things like "Did the control paint itself?" etc, or
thinking that you just call your code and if there's no exception, then
all's dandy. That Test Automation Snake Oil paper you posted the link to
was talking about GUI test scripts, not unit testing, for example.

Unit testing is under the hood testing, as is step through debugging. What I am talking about is a specific test condition for each and every expected
behaviour of the function. Test condition and the expected behaviour are two angles on the same thing. This is why I don't see how I could pick up
incorrect results of a function from stepping through but not from a unit
test, because every single thing I know the function should be doing gets
put into the unit test. Hence, any remaining bugs are bugs I wouldn't
recognise anyway, so what is stepping through the code going to do for me?

As I've said before, I do use the debugger to step through code, but not to ensure code correctness, and hence it's not a religious requirement for me
to step through every line I write. It is, however, a requirement for me to test what I write.

I find several of these bugs every day by stepping through my code and
through code belonging to other developers. It's a very powerful debugging technique, but of course should be used in addition to your manual and
automated unit tests. You should be using every tool in your debugging
arsenal, rather than just cherry-picking the ones that you believe (possibly
mistakenly) to give the most bang-per-buck.


I do use a range of tools, but I believe that for testing code correctness,
unit testing is more thorough, faster, and more communicative to the other
developers than stepping through the function. If you write a function,
step through it, and decide it's working OK, but the next person changes
the code a little, you need to make sure they are checking for exactly the
same things as you were when you stepped through. Otherwise, there is the
possibility that they break something without realising it. If you document
all the requirements of the function in a unit test, then you force the
next person who comes along to adhere to your contract.

Maybe I'm missing something about the way you step through your code. How,
exactly, do you verify that it's correct? I'm not talking about design or
performance, just that the results of the function are what they should be,
given the inputs/state of the object, etc. If I were doing it, I would be
stepping through the code, looking at the values of the results of
calculations, ensuring that the right branches in conditional statements
were being taken, that exceptions were being thrown when they should be,
this kind of thing. All of this you can do in a unit test. What more do you
check for, and how do you do it?

Niall

Nov 15 '05 #36
I don't know too many sites talking about unit testing, but it gets lots of
hits on google.

There is: http://www.extremeprogramming.org/rules/unittests.html which is
fairly generalistic, but it gives links to other things on the site related
to the unit tests and how they can be used.

There is also http://www.c2.com/cgi/wiki?UnitTest. On this site, they talk
about XP style unit tests being called "Programmer Tests", though I have
never heard them referred to as such before. There is heaps of information
on lots of things on this site, though I find I can wander for a long time
through all the links.

This is the first job I've had where unit tests are used, so I've only used
one framework, which is NUnit. We actually run on the older version of NUnit
because we've been using it for about 2 years, and we've modified the code
(it's open source, which is useful) significantly to meet our needs. As a
result, migrating to the newer NUnit would be a decent bit of work, and what
we have now does the trick. The new NUnit has a few features which are quite
handy, like the ability to unload the assembly while the test rig is
running, so you can run your test, go fix the code and run again without
having to restart the test app. So yeah, NUnit is pretty good. There may be
better ones out there, I'm not really sure.

Niall
"Jeff Ogata" <jo****@eatmyspam.com> wrote in message
news:e$**************@TK2MSFTNGP12.phx.gbl...
Hi Niall,

I'm actually not going to get in between you and Mark here ;) but I just
had a couple of quick questions on getting started with unit testing.

Can you point me to any resources that will give me a quick overview of
unit testing? Are there any tools that you recommend? I've briefly looked
at NUnit -- is that the way to go?

Thanks,
Jeff

Nov 15 '05 #37
> This is perhaps the crux of where we differ - your attitude seems to be
that your main responsibility is to ensure that the code you're testing
meets the stated requirements. In my case, that's the very least of what
I'm looking for - indeed, I am usually surprised if my code has this type
of bug, so I don't tend to spend the majority of my time looking for it.

Instead, the majority of my time is spent looking for omissions or
mistakes in the requirements, and for design bugs and implementation
mistakes. Stepping through my code is one essential technique here, and so
is the use of code reviews. Studies show that each of these techniques
finds a different set of bugs.

What do you mean by implementation mistakes? Do you mean mistakes such that
the code doesn't do what it's supposed to do, or mistakes such as slower
code, messy code, etc? What I'm wondering is how you can be sure that your
code fully meets the requirements if you don't test it against the
requirements?

I agree that bad design and slower code aren't really the domain of unit
testing. With the performance thing, if a particular function has to run in
less than a certain amount of time, you can always write a test that fails
if it takes longer (see the sketch below). I guess the attitude of the unit
test is to give you the most important thing - a program which does what
the customer wants. At the end of the day, the customer doesn't care if you
have spaghetti code, as long as the program works. On the other hand, if
you have a great design, but your program doesn't do what they want, then
they won't be pleased.
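
For the timing case, a rough sketch (the 100ms budget and the DoWork
method are invented for the example; Environment.TickCount is coarse, but
it makes the point):

using System;
using NUnit.Framework;

[TestFixture]
public class TimingTests
{
    // Hypothetical stand-in for whatever function has the time budget.
    static void DoWork()
    {
        System.Threading.Thread.Sleep(10);
    }

    [Test]
    public void DoWorkCompletesWithinBudget()
    {
        int start = Environment.TickCount;
        DoWork();
        int elapsed = Environment.TickCount - start;
        // Fail the test if the call blows its (made-up) 100ms budget.
        Assert.IsTrue(elapsed <= 100, "DoWork took " + elapsed + "ms");
    }
}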

I'm not advocating a mindset of "It works, meets the requirements fully, so
lock it away and it doesn't matter if it has a bad design." In fact, it's a
bit the opposite. Once you have the unit tests solidly down, then you know
your code meets the requirements, and you know that if something breaks, you
will find out about it. So at any time, anyone can come along and change the
code to what they think is a better design, faster, etc. So the unit test
doesn't test design, but it gives you a safeguard to facilitate design
changes.

> To their surprise, the breakpoint was never hit and the test completed
successfully. A quick investigation with the debugger showed that a
function a few steps up the call chain had an optimisation that allowed it
sometimes to skip unnecessary work. In this case, it skipped the new code.
Treating the code as a black box in this case just didn't work - the
developer put in some inputs, got correct outputs, and entirely missed the
fact that his new code wasn't being executed at all.

This isn't the way I write unit tests, and as far as I've seen, it's not
the way that they're supposed to be written, either. The unit test is
supposed to isolate and target specific areas of code. So there should be a
test that specifically targets the function in question, ignoring the
optimisation. As for the optimisation in the other method - there should be
a unit test that specifically targets that as well. Unit tests which test
at a higher level are OK too, and it's probably a good idea to watch the
behaviour of the object at a higher level as well, but like I said before,
unit testing is under-the-hood testing.
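
In other words, as well as any higher-level test that goes through the
caller, there should be a test that drives the changed method directly. A
sketch, with invented names:

using NUnit.Framework;

[TestFixture]
public class IsolationTests
{
    // Stand-in for the new method the developer added.
    static int NewCalculation(int x)
    {
        return x < 0 ? -1 : 1;
    }

    [Test]
    public void NewCalculationHandlesNegativeInput()
    {
        // Targets the changed code itself, so an optimisation in some
        // caller higher up the chain can't silently skip it.
        Assert.AreEqual(-1, NewCalculation(-5));
    }
}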

Here's another example, closer to home. The following code shows a nasty
bug:

bool AccessGranted = true;

try
{
    // See if we have access to c:\test.txt
    new FileStream(@"c:\test.txt",
                   FileMode.Open,
                   FileAccess.Read).Close();
}
catch (SecurityException x)
{
    // access denied
    AccessGranted = false;
}
catch (...)
{
    // something else happened
}

If the CLR grants access to the test file in this example, everything
works fine. If the CLR denies access, everything works fine as well,
because a SecurityException is thrown. But what happens if the
discretionary access control list (DACL) on the file doesn't grant access?
Then a different exception is thrown, but AccessGranted will remain true
because of the optimistic assumption made on the first line of code. The
bug was really in the requirements as well as in the code, because they
didn't state what should happen if the CLR granted access but the file
system didn't. Stepping through this code would have shown that a
completely different exception was being thrown when the DACL denied
access, and therefore would have pointed to the omission in the
requirements as well as finding the bug.


I think this could have been coded better. In general, when you have code
that is trying to discover if you can do a certain thing, it should presume
the negative in the beginning. Especially for things like "Do I have the
security rights to do <xyz>?", it should always be false in the beginning.
That way, you ensure that the only time you ever think you have the right
to do the action is when the code that recognises the affirmative has run.

I'm not sure if you mean the base Exception type by the "..." in your code.
I wouldn't write code that catches just a plain exception unless it was
planning to wrap the exception in a more meaningful one and re-throw. If
you didn't mean to catch the base Exception type there, then the DACL
exception wouldn't be caught, and the unit test would show an error. This
is the way NUnit behaves: if an assertion fails, you get a test failure;
if an exception escapes from the test, you get a test error.

Personally, I would have written the function like:

bool AccessGranted = false;
try
{
    new FileStream(@"c:\test.txt",
                   FileMode.Open,
                   FileAccess.Read).Close();
    AccessGranted = true;
}
catch (SecurityException)
{
}
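
For what it's worth, a DACL denial on the file surfaces as
UnauthorizedAccessException in the .NET Framework, so if the requirements
say that case should also count as "no access", the helper can name it
explicitly. A sketch only (the class and method names here are invented):

using System;
using System.IO;
using System.Security;

public class AccessCheck
{
    // Pessimistic version: accessGranted only becomes true if the
    // open actually succeeds.
    public static bool CanReadTestFile()
    {
        bool accessGranted = false;
        try
        {
            new FileStream(@"c:\test.txt",
                           FileMode.Open,
                           FileAccess.Read).Close();
            accessGranted = true;
        }
        catch (SecurityException)
        {
            // The CLR's code-access security denied us.
        }
        catch (UnauthorizedAccessException)
        {
            // The file system's DACL denied us.
        }
        return accessGranted;
    }
}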

But, of course, an even more powerful tool than unit testing is hindsight...

Niall
Nov 15 '05 #38
> most important thing - a program which does what the customer wants. At
the end of the day, the customer doesn't care if you have spaghetti code,
as long as the program works.


that stopped being true years ago. customers now want their source code, or
even their inhouse programmers to look at the code. they are getting wiser
about having it done properly in the first place so high priced programmers
don't bleed them dry trying to make bread out sphaghetti. spahgetti.
spahghetti. i know it doesn't look right, i just can't remember how to spell
it.
Nov 15 '05 #40
Hi Niall,
What I'm wondering is how you can be sure that your code fully meets the requirements if you don't test it against the requirements? <<

Of course I test against requirements, but for me, that's just the first
step. After that, I start testing to find requirements bugs, design bugs,
implementation bugs, etc.
I guess the attitude of the unit test is to give you the most important thing - a program which does what the customer wants. At the end of the day,
the customer doesn't care if you have spaghetti code, as long as the program
works. <<

If you're doing custom software development, your customer certainly cares
deeply about the internal quality of the code. The customer's staff will
have to debug, maintain and enhance the code for months and years to come.
If you're doing product development, the end-customers may not care about
the internal quality of the code, but the company for which you're
developing the product certainly cares. Once again, the company will have to
debug, maintain and enhance the product for a long time.

So somebody, somewhere, *always* cares about the code's internal quality.
So the unit test doesn't test design, but it gives you a safeguard to facilitate design changes. <<

Here at least we agree on something!
The unit test is supposed to isolate and target specific areas of code. <<

The developer in question thought that he *was* targeting a specific area of
code. He didn't know about the optimisation, and indeed had no way of
knowing about the optimisation without stepping through the code in a
source-level debugger.
I think this could have been coded better. <<

Of course in hindsight, it should have been coded better. But you've just
been arguing that the internal code quality doesn't really matter, as long
as the unit tests are satisfied. You can't have it both ways.
I wouldn't write code that catches just a plain exception unless it was
planning to wrap the exception in a more meaningful one and re-throw. <<

There are various reasons to catch System.Exception, including the one you
mentioned. Another reason is to reverse a transaction after any exception.
Yet another reason is that many developers often put in a System.Exception
catch during testing, and forget to remove it. And yet another reason is
that developers sometimes don't realise that you shouldn't catch
System.Exception without rethrowing it. Finally, a few of the .NET base
class methods suppress all exceptions for "security" reasons, and just fail
silently.

Whatever the reason, you won't find this type of requirements/coding bug
unless you step through the code and find that an unexpected exception type
was being silently thrown and caught.

Regards,

Mark
----
Author of "Comprehensive VB .NET Debugging"
http://www.apress.com/book/bookDisplay.html?bID=128
"Niall" <as**@me.com> wrote in message
news:uw**************@TK2MSFTNGP12.phx.gbl...
<snip>

Nov 15 '05 #42
> catch during testing, and forget to remove it. And yet another reason is
that developers sometimes don't realise that you shouldn't catch
System.Exception without rethrowing it. Finally, a few of the .NET base
i totally disagree with that statement. i'm here wondering why you made it.
would you care to explain? I can partially see your point but that statement
is just too general to let slide.

"Mark Pearce" <ev**@bay.com> wrote in message
news:#h**************@tk2msftngp13.phx.gbl...
<snip>

Nov 15 '05 #43
Hi Alvin,

The current "best practice" for exception management is that you shouldn't
catch an exception unless you expected that exception, you understand it,
and you're going to deal with it. Instead, you should let exceptions that
you don't know how to handle bubble upwards to code that does know how to
handle that exception, or until the top-level exception handler is reached.

Catching System.Exception (without re-throwing it) is bad because it's
stating that you know how to handle *every* type of CLS-compliant exception,
even unusual exceptions such as ExecutionEngineException,
OutOfMemoryException and StackOverflowException. In general, your code won't
know how to handle *every* type of exception.

Of course, sometimes you need to catch every exception, such as when you
need to reverse a transaction or you want to create and throw a more
meaningful custom exception. But in each one of these cases, you're
re-throwing the System.Exception in some form.
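
The transaction case, as a sketch (DoTransactionalWork and the use of
IDbTransaction here are just stand-ins for whatever your code actually
does):

using System;
using System.Data;

public class TransactionExample
{
    // Hypothetical unit of work inside the transaction.
    static void DoTransactionalWork()
    {
    }

    public static void Transfer(IDbTransaction tx)
    {
        try
        {
            DoTransactionalWork();
            tx.Commit();
        }
        catch (Exception)
        {
            // We catch *everything* here, but only to reverse the
            // transaction; the bare "throw" re-throws the original
            // exception for code that knows how to handle it.
            tx.Rollback();
            throw;
        }
    }
}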

Regards,

Mark
--
Author of "Comprehensive VB .NET Debugging"
http://www.apress.com/book/bookDisplay.html?bID=128
"Alvin Bruney" <vapordan_spam_me_not@hotmail_no_spamhotmail.com > wrote in
message news:eE**************@TK2MSFTNGP10.phx.gbl...
<snip>


Nov 15 '05 #44

"Mark Pearce" <ev**@bay.com> wrote in message
news:%2****************@tk2msftngp13.phx.gbl...
Hi Niall,
So somebody, somewhere, *always* cares about the code's internal quality.

Of course, but my point was that code that has a nice design but doesn't
work is much harder to sell to the customer than code that could have its
design improved but actually does the job. I'm not saying that I have a
"just get it done, we can make it pretty later" attitude, just that it's
more important to make sure the program does what is needed. From my
experience, once you get a decent-sized system, you can be forever
"improving" the design, because the architecture can never really fully
facilitate all cases of usage.

The developer in question thought that he *was* targeting a specific area of code. He didn't know about the optimisation, and indeed had no way of
knowing about the optimisation without stepping through the code in a
source-level debugger.

Well, presumably the developer was aware of all the code in the method
they had written? The test should directly cause the methods being tested
to be run. To me, this is what testing in isolation means.

I think this could have been coded better. <<
Of course in hindsight, it should have been coded better. But you've just
been arguing that the internal code quality doesn't really matter, as long
as the unit tests are satisfied. You can't have it both ways.


As has been said before, bad practice in coding, unit testing, or
step-through debugging can bring both sides down. If the coder had stepped
through the function without causing the other type of exception to be
thrown, then no one would have known about it. All I'm saying is that
better coding of the original method would have allowed the unit test to
pick up the fault. Unit testing doesn't ensure perfect coding practices,
and neither does stepping through with a debugger...

I wouldn't write code that catches just a plain exception unless it was
planning to wrap the exception in a more meaningful one and re-throw. <<

There are various reasons to catch System.Exception, including the one you
mentioned. Another reason is to reverse a transaction after any exception.
Yet another reason is that many developers often put in a System.Exception
catch during testing, and forget to remove it. And yet another reason is
that developers sometimes don't realise that you shouldn't catch
System.Exception without rethrowing it. Finally, a few of the .NET base
class methods suppress all exceptions for "security" reasons, and just

fail silently.

Whatever the reason, you won't find this type of requirements/coding bug
unless you step through the code and find that an unexpected exception type was being silently thrown and caught.


Any exception that escapes the function will cause the unit test to fail,
so if the coder was wrapping the exception and rethrowing, the unit test
would have caught it. If it was rolling back the transaction and then
rethrowing, the unit test would have caught that too. The only case that
breaks the unit test is when the exception is swallowed, which is bad
practice. You can use code-smell type software to pick out this kind of
thing. If you rely only on the step-through, you rely on the problem
situation raising its head during that one case of the step-through.
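
To make the swallowing case concrete (invented names), this is the one
shape of bug the test rig can't see, because the exception never escapes:

using System;

public class Document
{
    // Bad practice: the catch block swallows the exception, so the
    // caller - and any unit test - sees a successful save even though
    // the write failed. Only a step-through or a code-smell tool
    // would show it.
    public static bool Save()
    {
        try
        {
            throw new InvalidOperationException("write failed");
        }
        catch (Exception)
        {
            // swallowed
        }
        return true;
    }
}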

Niall
Nov 15 '05 #45
Hi,

Jon Skeet wrote:
http://nunit.sf.net for NUnit itself


Is it me, or are all the unit testing utilities identical?

I think I'll use csUnit.. just because I like their website better :)

Pete
Nov 15 '05 #46
Pete <pv*****@gawab.com> wrote:
Jon Skeet wrote:
http://nunit.sf.net for NUnit itself


Is it me, or are all the unit testing utilities identical?

I think I'll use csUnit.. just because I like their website better :)


I suspect they're likely to be 90% the same - but different people will
each need the different 10% that different projects supply :)

--
Jon Skeet - <sk***@pobox.com>
http://www.pobox.com/~skeet/
If replying to the group, please do not mail me too
Nov 15 '05 #47
Yeah, I think they're probably much the same. I would expect NUnit and JUnit
to be similar...

Pete, if I were you, I'd have a look at the features of a few of them and
make your decision. If you go ahead with the testing and end up with several
thousand unit tests, it may be a pain to change to another testing platform
if you need to :P

I would definitely recommend that whatever test platform you use, you get
one that's open source. If you use testing a lot, it becomes part of the
culture, and you can come up with a lot of features that are helpful for
the team but aren't in the original product. You can also bend the
behaviour of the existing code to suit your purposes better.

I'll be interested to read Alvin's experiences, I hope they're positive.

Niall

"Jon Skeet" <sk***@pobox.com> wrote in message
news:MP************************@news.microsoft.com ...
Pete <pv*****@gawab.com> wrote:
Jon Skeet wrote:
http://nunit.sf.net for NUnit itself


Is it me, or are all the unit testing utilities identical?

I think I'll use csUnit.. just because I like their website better :)


I suspect they're likely to be 90% the same - but different people will
each need the different 10% that different projects supply :)

--
Jon Skeet - <sk***@pobox.com>
http://www.pobox.com/~skeet/
If replying to the group, please do not mail me too

Nov 15 '05 #48
Niall <as**@me.com> wrote:
Yeah, I think they're probably much the same. I would expect NUnit and JUnit
to be similar...

They were much *more* similar earlier on - NUnit has now been rewritten to
use attributes rather than relying on naming conventions, which strikes me
as a thoroughly good thing.
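
A minimal sketch of the attribute style (the old rig, like JUnit, found
tests via a TestCase base class and naming conventions):

using NUnit.Framework;

[TestFixture]
public class WidgetTests
{
    // No special base class or naming convention needed - the [Test]
    // attribute marks this method as a test.
    [Test]
    public void EmptyListHasNoItems()
    {
        Assert.AreEqual(0, new System.Collections.ArrayList().Count);
    }
}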

<snip>
I would definitely recommend that whatever test platform you use, you get
one that's open source.


You mean there are some which aren't open? :)

--
Jon Skeet - <sk***@pobox.com>
http://www.pobox.com/~skeet/
If replying to the group, please do not mail me too
Nov 15 '05 #49
