
A simple unit test framework

nw
Hi,

I previously asked for suggestions on teaching testing in C++. Based
on some of the replies I received, I decided that the best way to
proceed would be to teach the students how they might write their own
unit test framework, and then in a lab session see if I can get them to
write their own. To give them an example I've created the following
UTF class (with a simple test program following). I would welcome any
suggestions on how anybody here feels this could be improved:

Thanks for your time!

#include <iostream>
#include <string>

class UnitTest {
private:
    int tests_failed;
    int tests_passed;
    int total_tests_failed;
    int total_tests_passed;
    std::string test_set_name;
    std::string current_file;
    std::string current_description;

public:
    // The initialiser list is written in declaration order, which is the
    // order the members are actually initialised in.
    UnitTest(std::string test_set_name_in)
        : tests_failed(0),
          tests_passed(0),
          total_tests_failed(0),
          total_tests_passed(0),
          test_set_name(test_set_name_in),
          current_file(),
          current_description() {
        std::cout << "*** Test set : " << test_set_name << std::endl;
    }

    void begin_test_set(std::string description, const char *filename) {
        current_description = description;
        current_file = filename;
        tests_failed = 0;
        tests_passed = 0;
        std::cout << "****** Testing: " << current_description << std::endl;
    }

    void end_test_set() {
        std::cout << "****** Test : " << current_description << " complete, ";
        std::cout << "passed " << tests_passed << ", failed "
                  << tests_failed << "." << std::endl;
    }

    // Names like _TestType (underscore followed by a capital letter) are
    // reserved to the implementation, so the parameter is named TestType.
    template<class TestType>
    bool test(TestType t1, TestType t2, int linenumber) {
        bool test_result = (t1 == t2);

        if (!test_result) {
            std::cout << "****** FAILED : " << current_file << ","
                      << linenumber;
            std::cout << ": " << t1 << " is not equal to " << t2
                      << std::endl;
            total_tests_failed++;
            tests_failed++;
        } else {
            tests_passed++;
            total_tests_passed++;
        }
        return test_result;  // the original posting was missing this return
    }

    void test_report() {
        std::cout << "*** Test set : " << test_set_name << " complete, ";
        std::cout << "passed " << total_tests_passed;
        std::cout << " failed " << total_tests_failed << "." << std::endl;
        if (total_tests_failed != 0)
            std::cout << "*** TEST FAILED!" << std::endl;
    }
};

int main() {
    // Rectangle and Circle are assumed to be defined elsewhere.
    UnitTest ut("Test Shapes");

    // Test class Rectangle: a rectangle at position 0,0 with sides of length 10.
    ut.begin_test_set("Rectangle", __FILE__);
    Rectangle r(0, 0, 10, 10);
    ut.test(r.is_square(), true, __LINE__);
    ut.test(r.area(), 100.0, __LINE__);

    // A 1x5 rectangle is not square, so this first test is expected to
    // fail and demonstrate the failure output.
    Rectangle r2(0, 0, 1, 5);
    ut.test(r2.is_square(), true, __LINE__);
    ut.test(r2.area(), 5.0, __LINE__);
    ut.end_test_set();

    // Test class Circle. Note that comparing floating-point results for
    // exact equality is fragile; a tolerance would be more robust.
    ut.begin_test_set("Circle", __FILE__);
    Circle c(0, 0, 10);
    ut.test(c.area(), 314.1592654, __LINE__);
    ut.test(c.circumference(), 62.831853080, __LINE__);
    ut.end_test_set();

    ut.test_report();

    return 0;
}
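
One small refinement worth showing students (an editor's sketch, not part
of the original post; the UT_TEST name is made up): a macro can capture
__LINE__ at the call site, so a test can never forget to pass it:

// Hypothetical convenience macro; expands to the test() call above.
#define UT_TEST(ut, actual, expected) \
    (ut).test((actual), (expected), __LINE__)

// usage, equivalent to the explicit calls in main():
//   UT_TEST(ut, r.area(), 100.0);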

May 3 '07
On May 6, 1:27 am, Gianni Mariani <gi3nos...@mariani.ws> wrote:
Pete Becker wrote:
...
Yup. Typical developer-written test: I don't understand testing well
enough to do it right, so I'll do something random and hope to hit a
problem. <g>
I have yet to meet a "test" developer that can beat the Monte Carlo test
for coverage.
OK - I agree, there are cases where a Monte Carlo test will never be
able to test adequately, but as a rule, it is better to have a MC test
than not. I have uncovered more legitimate problems from the MC test
than from carefully crafted tests.
Which proves that you don't have anyone who knows how to write
tests. A carefully crafted test will, by definition, find any
problem that a MC test will find.

In my experience, the main use of MC tests is to detect when
your tests aren't carefully crafted. Just as the main use of
testing is to validate your process---anytime a test reveals an
error, it is a sign that there is a problem in the process, and
that the process needs improvement.

--
James Kanze (Gabi Software) email: ja*********@gmail.com
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung
9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34

May 6 '07 #51
James Kanze wrote:
On May 5, 10:44 pm, Ian Collins <ian-n...@hotmail.com> wrote:
>James Kanze wrote:
>>On May 4, 3:01 pm, anon <a...@no.no> wrote:
>>>The latest trends are to write tests first which demonstrate the
requirements, then code (classes+methods).
>>The latest trend where? Certainly not in any company concerned
with good management, or quality software.
>Have you ever been in charge of a company's software development? I
have and the best thing I ever did to improve both the productivity of
the teams and quality of the code was to introduce eXtreme Programming,
which includes TDD as a core practice.
>Our delivery times and field defect rates more than vindicated the change.

I've worked with the people in charge. We evaluated the
procedure, and found that it simply didn't work. Looking at
other companies as well, none practicing eXtreme Programming
seem to be shipping products of very high quality. In fact, the
companies I've seen using it generally don't have the mechanisms
in place to actually measure quality or productivity, so they
don't know what the impact was.
We certainly did - field defect reports and the internal cost of
correcting them.
When I actually talk to the engineers involved, it turns out
that e.g. they weren't using any accepted means of achieving
quality before. It's certain that adopting TDD will improve
things if there was no testing whatsoever previously.
Similarly, pair programming is more cost-efficient than never
letting a second programmer look at, or at least understand,
another programmer's code, even if it is an order of magnitude or more
less efficient than a well-run code review.
Have you tried it? Not having to hold code reviews was one of the
biggest real savings for us.
Compared to
established good practices, however, most of the suggestions in
eXtreme Programming represent a step backwards.
That's your opinion and you are entitled to it. Mine, through direct
experience, is diametrically opposed.

--
Ian Collins.
May 6 '07 #52
James Kanze wrote:
....
>
Yes, but nobody but an idiot would pay you for such a thing.
Thread safety, to cite but the most obvious example, isn't
testable, so you just ignore it?
Common misconception.

1. Testability of code is a primary objective. (i.e. code that can't be
tested is unfit for purpose)

2. Any testing (MT or not) is about a level of confidence, not absoluteness.

I have discovered that MT test cases that push the limits of the code
using random input do provide sufficient coverage to produce a level
of confidence that makes the target "testable".

If you consider what happens when you have multiple processors
interacting randomly in a consistent system, you end up testing more
possibilities than can present themselves in a more systematic system.
However, with threading, it's not really systematic because external
events cause what would normally be systematic to be random. Now
consider what happens in a race condition failure. This normally
happens when two threads enter sections of code that should be mutually
exclusive. Usually there are a few thousand instructions in your test
loop (for a significant test). The regions that can fail are usually
tens of instructions, sometimes hundreds. If you are able to push
randomness, how many times do you need to reschedule one thread to hit a
potential problem? Given cache latencies, pre-emption from other
threads, program randomness (like memory allocation variances) you can
achieve pretty close to full coverage of every possible race condition
in about 10 seconds of testing. There are some systematic start-up
effects that may not be found, but you mitigate that by running
automated testing. (In my shop, we run unit tests on the build machine
around the clock - all the time.)

So that leaves us with the level-of-confidence point. You can't achieve
perfect testing all the time, but you can achieve a high level of
confidence testing all of the time.

It does require a true multi-processor system to test adequately. I
have found a number of problems that almost always fail on a true MP
system that hardly ever fail on a SP system. Very rarely have I found
problems on 4 processor or more systems that were not also found on a 2
processor system, although I would probably spend the money on a 4-core
CPU for developer systems today just to add more levels of confidence.

In practice, I have never seen a failure in the wild that could not be
discovered with a properly crafted MC+MT test.

So to truly get the coverage you want, the test needs to introduce as much
randomness as possible, which means running more threads than processors and
pushing random inputs. Then run these tests all the time (after every automated
build) and make it so it stops dead when there is a problem discovered
so you can debug the issue at the point of failure. (Which is one of the
reasons I don't like exceptions thrown when a programming error is
found. It helps immensely to see the complete context of the error in
finding the problem.)
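
For concreteness, here is a minimal sketch of the kind of randomized
multi-threaded stress test being described. It uses modern C++
(std::thread postdates this thread), and the Counter class and all
names are hypothetical:

#include <atomic>
#include <cstdlib>
#include <iostream>
#include <random>
#include <thread>
#include <vector>

// Hypothetical class under test: a counter that must be thread-safe.
struct Counter {
    std::atomic<long> value{0};
    void add(long n) { value += n; }
};

int main() {
    Counter counter;
    std::atomic<long> expected{0};
    const int threads = 8;          // deliberately more threads than processors
    const long iterations = 100000;

    std::vector<std::thread> workers;
    for (int t = 0; t < threads; ++t) {
        workers.emplace_back([&, t] {
            std::mt19937 rng(t);    // per-thread random stream
            std::uniform_int_distribution<long> dist(-5, 5);
            for (long i = 0; i < iterations; ++i) {
                long n = dist(rng); // random inputs perturb the timing
                counter.add(n);
                expected += n;
            }
        });
    }
    for (auto &w : workers) w.join();

    // Stop dead at the point of failure, as suggested above, rather than
    // throwing: abort() leaves the full context visible in a debugger.
    if (counter.value != expected) std::abort();
    std::cout << "MC/MT stress test passed\n";
    return 0;
}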
>
My customers want to know what the code will do, and how much
development will cost, before they allocate the resources to
develop it. Which means that I have a requirements
specification which has to be met.
I have met very few customers that would know a spec even if it
smacked them up the side of the head. Sad. Inevitably it leads to a
pissed-off customer.
May 6 '07 #53
James Kanze wrote:
....
>The latest trends are to write tests first which demonstrate the
requirements, then code (classes+methods).

The latest trend where? Certainly not in any company concerned
with good management, or quality software.
Look up TDD.
>
>In this case you will not
have to do a coverage, but it is a plus. This way, the code you write
will be minimal and easier to understand and maintain.

And will not necessarily meet requirements, or even be useful.
Actually, it does meet the requirements by definition since the test
case demonstrates how the requirements should be met.

See my "log"ging example.
May 6 '07 #54
Ian Collins wrote:
Gianni Mariani wrote:
>Ian Collins wrote:

.... A good test developer has the knack of
>>thinking like a user, where a developer thinks like a developer.
My BS meter just pegged. A developer had better think like a user or
they're a crappy developer IMHO.
While ideal, that can be difficult when the developer is part of a large
team working on a component of a complex system. Sure everyone should
have some degree of domain knowledge, but it isn't always possible.
Many project teams I have worked on had a large number of contract staff
employed for their coding skill rather than product knowledge (I know, I
frequently was one!).

This was why in my shop, the test developers worked with the customer(s)
to design and implement the acceptance tests.
This is a very inefficient development model. It is somewhat outdated.
May 6 '07 #55
Gianni Mariani wrote:
>
I have met very few customers that would know a spec even if it
smacked them up the side of the head.
Welcome to the club!
Sad. Inevitably it leads to a pissed-off customer.
Any agile process (XP, Scrum or whatever) is ideal for this situation.
This situation is one of the main reasons these processes have evolved.
The short release cycles keep the customer in the loop and give them the
flexibility to change their mind without incurring significant rework
costs. I use a one week cycle with one particularly indecisive client!

--
Ian Collins.
May 6 '07 #56
Gianni Mariani wrote:
Ian Collins wrote:
>Gianni Mariani wrote:
>>Ian Collins wrote:

.... A good test developer has the knack of
thinking like a user, where a developer thinks like a developer.
My BS meter just pegged. A developer had better think like a user or
they're a crappy developer IMHO.
While ideal, that can be difficult when the developer is part of a large
team working on a component of a complex system. Sure everyone should
have some degree of domain knowledge, but it isn't always possible.
Many project teams I have worked on had a large number of contract staff
employed for their coding skill rather than product knowledge (I know, I
frequently was one!).

This was why in my shop, the test developers worked with the customer(s)
to design and implement the acceptance tests.

This is a very inefficient development model. It is somewhat outdated.
Which, the first paragraph or the second?

--
Ian Collins.
May 6 '07 #57
James Kanze wrote:
On May 6, 1:27 am, Gianni Mariani <gi3nos...@mariani.ws> wrote:
>Pete Becker wrote:
>...
>>Yup. Typical developer-written test: I don't understand testing well
enough to do it right, so I'll do something random and hope to hit a
problem. <g>
>I have yet to meet a "test" developer that can beat the Monte Carlo test
for coverage.
>OK - I agree, there are cases where a Monte Carlo test will never be
able to test adequately, but as a rule, it is better to have a MC test
than not. I have uncovered more legitimate problems from the MC test
than from carefully crafted tests.

Which proves that you don't have anyone who knows how to write
tests. A carefully crafted test will, by definition, find any
problem that a MC test will find.
We will have to agree to disagree on this.

I have at least one piece of anecdotal evidence which suggests that
no one is capable of truly foreseeing the full gamut of issues that
can be found in a well designed MC test.

A pass on an MC test raises the level of confidence which is always a
good thing.
>
In my experience, the main use of MC tests is to detect when
your tests aren't carefully crafted. Just as the main use of
testing is to validate your process---anytime a test reveals an
error, it is a sign that there is a problem in the process, and
that the process needs improvement.
If I read between the lines here, I think you're saying that we need
test developers to conceive every kind of possible failure. I have yet
to meet anyone who could do that consistently and I have been developing
software for a very long time.

I don't think your premise (if I read it correctly) is achievable.

I lean toward making the computer do as much work as possible
because it is much more consistent than a developer (no problems with
headaches). Case in point: if you look at MakeXS, it's as simple as putting
a cpp file in a folder and running "make" - header files are found
automatically for you, the idea being to make the development environment
as easy as possible.

Again, I am not saying that the MC test is the only test you need to
write. I am, however, making the observation that I have yet to meet
anyone that can find all the problems found by a well crafted MC test.

Said another way, there is a large set in the intersection of the issues
found in an MC test and the issues found by a competent test developer.
I'd rather the competent test developer push the envelope on tests
that a well crafted MC test can't find (i.e. very systematic edge cases)
and let the MC test do the hard work on the rest.

i.e.

+--------------------------------------------+
| MC Test Discoverable Set                   |
|     +--------------------------------------+--------+
|     | Intersection of Test Dev + MC Test   |        |
|     | Discoverable set                     |        |
+-----+--------------------------------------+        |
      | Test Dev Discoverable set                     |
      +-----------------------------------------------+
May 6 '07 #58
Ian Collins wrote:
Gianni Mariani wrote:
>I have met very few customers that would know a spec even if it
smacked them up the side of the head.

Welcome to the club!
>Sad. Inevitably it leads to a pissed-off customer.

Any agile process (XP, Scrum or whatever) is ideal for this situation.
This situation is one of the main reasons these processes have evolved.
The short release cycles keep the customer in the loop and give them the
flexibility to change their mind without incurring significant rework
costs. I use a one week cycle with one particularly indecisive client!
Yes - I've done it. Short release cycles - I invented them. Management
still fsck's up all the time, every time.

May 6 '07 #59
Gianni Mariani wrote:
Ian Collins wrote:
>Gianni Mariani wrote:
>>I have met very few customers that would know a spec even if it
smacked them up the side of the head.

Welcome to the club!
>>Sad. Inevitably it leads to a pissed-off customer.

Any agile process (XP, Scrum or whatever) is ideal for this situation.
This situation is one of the main reasons these processes have evolved.
The short release cycles keep the customer in the loop and give them the
flexibility to change their mind without incurring significant rework
costs. I use a one week cycle with one particularly indecisive client!

Yes - I've done it. Short release cycles - I invented them. Management
still fsck's up all the time, every time.
You should either educate them, or be them!

--
Ian Collins.
May 6 '07 #60
Ian Collins wrote:
Gianni Mariani wrote:
>Ian Collins wrote:
>>Gianni Mariani wrote:
Ian Collins wrote:

.... A good test developer has the knack of
thinking like a user, where a developer thinks like a developer.
My BS meter just pegged. A developer had better think like a user or
they're a crappy developer IMHO.

While ideal, that can be difficult when the developer is part of a large
team working on a component of a complex system. Sure everyone should
have some degree of domain knowledge, but it isn't always possible.
Many project teams I have worked on had a large number of contract staff
employed for their coding skill rather than product knowledge (I know, I
frequently was one!).

This was why in my shop, the test developers worked with the customer(s)
to design and implement the acceptance tests.
This is a very inefficient development model. It is somewhat outdated.

Which, the first paragraph or the second?
first and second.

I don't see a practical distinction between tester, developer and
designer. While the overall design of the system needs an "architect",
the job of the architect is to provide a framework that inevitably needs
extensibility.
May 6 '07 #61
Gianni Mariani wrote:
Ian Collins wrote:
>Gianni Mariani wrote:
>>Ian Collins wrote:
Gianni Mariani wrote:
Ian Collins wrote:
>
.... A good test developer has the knack of
>thinking like a user, where a developer thinks like a developer.
My BS meter just pegged. A developer had better think like a user or
they're a crappy developer IMHO.
>
While ideal, that can be difficult when the developer is part of a large
team working on a component of a complex system. Sure everyone should
have some degree of domain knowledge, but it isn't always possible.
Many project teams I have worked on had a large number of contract staff
employed for their coding skill rather than product knowledge (I know, I
frequently was one!).

This was why in my shop, the test developers worked with the customer(s)
to design and implement the acceptance tests.
This is a very inefficient development model. It is somewhat outdated.

Which, the first paragraph or the second?

first and second.

I don't see a practical distinction between tester, developer and
designer. While the overall design of the system needs an "architect",
the job of the architect is to provide a framework that inevitably needs
extensibility.
The "architect" is the development team.

Automated acceptance tests (some call them *Customer* acceptance tests)
are a critical part of any product development process and someone has
to design and develop them. That someone can either be the customer, or
testers working as their proxy.

--
Ian Collins.
May 6 '07 #62
Ian Collins wrote:
Gianni Mariani wrote:
>Ian Collins wrote:
>>Gianni Mariani wrote:
I have met very few customers that would know a spec even if it
smacked them up the side of the head.
Welcome to the club!

Sad. Inevitably it leads to a pissed-off customer.
Any agile process (XP, Scrum or whatever) is ideal for this situation.
This situation is one of the main reasons these processes have evolved.
The short release cycles keep the customer in the loop and give them the
flexibility to change their mind without incurring significant rework
costs. I use a one week cycle with one particularly indecisive client!
Yes - I've done it. Short release cycles - I invented them. Management
still fsck's up all the time, every time.
You should either educate them, or be them!
Be them is the next challenge. Drowning in BS... I tell ya.
May 6 '07 #63
Ian Collins wrote:
....
The "architect" is the development team.

Automated acceptance tests (some call them *Customer* acceptance tests)
are a critical part of any product development process and someone has
to design and develop them. That someone can either be the customer, or
testers working as their proxy.
The customer is always ill-defined.

The products I have worked on most recently are consumer applications
(p2p). Product success is adoption by millions of customers. There is
no hard and fast "acceptance" test. The marketing team had no idea; I
ended up resigning, but that's another story.

G
May 6 '07 #64
Gianni Mariani wrote:
Ian Collins wrote:
....
>The "architect" is the development team.

Automated acceptance tests (some call them *Customer* acceptance tests)
are a critical part of any product development process and someone has
to design and develop them. That someone can either be the customer, or
testers working as their proxy.

The customer is always ill-defined.
That's why it is essential to have a proxy. For a consumer product, that
should be the internal product owner.

--
Ian Collins.
May 6 '07 #65
Ian Collins wrote:
Gianni Mariani wrote:
>Ian Collins wrote:
....
>>The "architect" is the development team.

Automated acceptance tests (some call them *Customer* acceptance tests)
are a critical part of any product development process and someone has
to design and develop them. That someone can either be the customer, or
testers working as their proxy.
The customer is always ill-defined.
That's why it is essential to have a proxy. For a consumer product, that
should be the internal product owner.
Finding a good dev is costly. Finding a good marketing/customer proxy
... priceless.

The only place I have ever found marketeers that knew what they were
doing was at WebTV (before it became MS).

Most of the successful companies I have worked with (other than WebTV)
were a success despite themselves.

There was one failed company that failed because the market disappeared
(web radio). Yes, it still exists but there is no money (not much
money) in it.

Other failed companies were mainly because the product that was
requested was never the product that the company needed and management
was never capable of comprehending what they could really do with the
product they had.

Developing software is relatively easy. A very good project manager once
said, software is just a set of decisions (I know I messed that one up).
The inference being: the more quickly you make the right decisions, the
quicker the product is developed. Hence, if your management team is
incapable of making decisions, they're not doing their job. Similarly,
if your dev team is incapable of making decisions, they're not doing
their job. I've never been accused of not making decisions.

May 6 '07 #66
Ian Collins wrote:
There are plenty of situations where a Monte Carlo test isn't
appropriate or even possible. A good test developer has the knack of
thinking like a user, where a developer thinks like a developer.
That's only a part of testing. A good tester thinks about how to make
the code fail, and aggressively tries to break it.

--

-- Pete
Roundhouse Consulting, Ltd. (www.versatilecoding.com)
Author of "The Standard C++ Library Extensions: a Tutorial and
Reference." (www.petebecker.com/tr1book)
May 6 '07 #67
On May 6, 3:03 am, Gianni Mariani <gi3nos...@mariani.ws> wrote:
James Kanze wrote:
...
Yes, but nobody but an idiot would pay you for such a thing.
Thread safety, to cite but the most obvious example, isn't
testable, so you just ignore it?
Common misconception.
1. Testability of code is a primary objective. (i.e. code that can't be
tested is unfit for purpose)
2. Any testing (MT or not) is about a level of confidence, not absoluteness.
I have discovered that MT test cases that push the limits of the code
using random input do provide sufficient coverage to produce a level
of confidence that makes the target "testable".
Then write it up, and publish it, because you have obviously
struck on something new, which no one else has been able to
measure. But you must mean something else by Monte Carlo
testing than has been meant in the past. Because the
probability of finding an error by just throwing random data at
a problem is pretty low for any code which has passed code
review.
If you consider what happens when you have multiple processors
interacting randomly in a consistent system, you end up testing more
possibilities than can present themselves in a more systematic system.
However, with threading, it's not really systematic because external
events cause what would normally be systematic to be random. Now
consider what happens in a race condition failure. This normally
happens when two threads enter sections of code that should be mutually
exclusive. Usually there are a few thousand instructions in your test
loop (for a significant test). The regions that can fail are usually
tens of instructions, sometimes hundreds.
The regions that can fail are often of the order of 2 or 3
machine instructions. In a block of several million. And in
some cases, the actual situation can only occur less often than
that: there is a threading error in the current implementation
of std::basic_string, in g++, but I've yet to see a test program
which will trigger it.
If you are able to push randomness, how many times do you need
to reschedule one thread to hit a potential problem?
Practically an infinity.
Given cache latencies, pre-emption from other threads, program
randomness (like memory allocation variances) you can achieve
pretty close to full coverage of every possible race condition
in about 10 seconds of testing. There are some systematic
start-up effects that may not be found, but you mitigate that
by running automated testing. (In my shop, we run unit tests
on the build machine around the clock - all the time.)
So perhaps, after a couple of centuries, you can say that your
code is reliable.
So that leaves us with the level-of-confidence point. You can't achieve
perfect testing all the time, but you can achieve a high level of
confidence testing all of the time.
Certainly. Testing can only prove the existence of errors.
Never the absence. Well-run shops don't count on testing,
because of this. (That doesn't mean that they don't test. Just
that they don't count on testing, alone, to ensure quality.)
It does require a true multi-processor system to test adequately. I
have found a number of problems that almost always fail on a true MP
system that hardly ever fail on a SP system. Very rarely have I found
problems on 4 processor or more systems that were not also found on a 2
processor system, although I would probably spend the money on a 4-core
CPU for developer systems today just to add more levels of confidence.
In practice, I have never seen a failure in the wild that could not be
discovered with a properly crafted MC+MT test.
And I have.

[...]
My customers want to know what the code will do, and how much
development will cost, before they allocate the resources to
develop it. Which means that I have a requirements
specification which has to be met.
I have met very few customers that would know a spec even if it
smacked them up the side of the head. Sad. Inevitably it leads to a
pissed-off customer.
Regrettably, it is often up to the vendor to define what the
customer actually needs. But that doesn't mean not writing
specifications.

--
James Kanze (Gabi Software) email: ja*********@gmail.com
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung
9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34

May 6 '07 #68
On May 6, 3:31 am, Ian Collins <ian-n...@hotmail.com> wrote:
Gianni Mariani wrote:
I have met very few customers that would know a spec even if it
smacked them up the side of the head.
Welcome to the club!
Sad. Inevitably it leads to a pissed-off customer.
Any agile process (XP, Scrum or whatever) is ideal for this situation.
If your goal is to rip off the customer, yes.
This situation is one of the main reasons these processes have evolved.
The short release cycles keep the customer in the loop and give them the
flexibility to change their mind without incurring significant rework
costs.
Prototyping and interacting with the customer are nothing new.
They've been part of every software development process for the
last 30 or 40 years.

--
James Kanze (Gabi Software) email: ja*********@gmail.com
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung
9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34

May 6 '07 #69
On May 6, 3:34 am, Gianni Mariani <gi3nos...@mariani.ws> wrote:
Short release cycles - I invented them.
You must be significantly older than me, then, because they were
the rule when I was learning software development, in the
1970's. (In fact, they were the rule where my father worked,
in the 1960's.)

--
James Kanze (Gabi Software) email: ja*********@gmail.com
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung
9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34

May 6 '07 #70
On May 6, 3:12 am, Ian Collins <ian-n...@hotmail.com> wrote:
James Kanze wrote:
I've worked with the people in charge. We evaluated the
procedure, and found that it simply didn't work. Looking at
other companies as well, none practicing eXtreme Programming
seem to be shipping products of very high quality. In fact, the
companies I've seen using it generally don't have the mechanisms
in place to actually measure quality or productivity, so they
don't know what the impact was.
We certainly did - field defect reports and the internal cost of
correcting them.
So what were the before and after measures?

You should at least publish this, since to date, all published
hard figures (as opposed to anecdotal evidence) go in the
opposite sense. (For that matter, the descriptions of eXtreme
Programming that I've seen didn't provide a means of actually
measuring anything.)
When I actually talk to the engineers involved, it turns out
that e.g. they weren't using any accepted means of achieving
quality before. It's certain that adopting TDD will improve
things if there was no testing whatsoever previously.
Similarly, pair programming is more cost-efficient than never
letting a second programmer look at, or at least understand,
another programmer's code, even if it is an order of magnitude or more
less efficient than a well-run code review.
Have you tried it? Not having to hold code reviews was one of the
biggest real savings for us.
Yes, we tried it. It turns out that effective code (and design)
review is the single most important element of producing quality
code. (I have done projects where we eliminated testing
entirely, without any really significant loss of quality. But
the process we had in place was already very well developed; I
wouldn't recommend that as a general rule.)
Compared to
established good practices, however, most of the suggestions in
eXtreme Programming represent a step backwards.
That's your opinion and you are entitled to it.
Actually, it's not an opinion. It's a result of concrete
measurement.

--
James Kanze (Gabi Software) email: ja*********@gmail.com
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung
9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34

May 6 '07 #71
On May 6, 3:05 am, Gianni Mariani <gi3nos...@mariani.ws> wrote:
James Kanze wrote:
...
The latest trends are to write tests first which demonstrate the
requirements, then code (classes+methods).
The latest trend where? Certainly not in any company concerned
with good management, or quality software.
Look up TDD.
I'm familiar with the theory. Regretfully, it doesn't work out
in practice.
In this case you will not
have to do a coverage, but it is a plus. This way, the code you write
will be minimal and easier to understand and maintain.
And will not necessarily meet requirements, or even be useful.
Actually, it does meet the requirements by definition since the test
case demonstrates how the requirements should be met.
Bullshit. I've seen just too many cases of code which is wrong,
but for which no test suite is capable of reliably triggering
the errors.
See my "log"ging example.
You mean where you confused the mathematical function "log()"
with a function used to log error messages?

--
James Kanze (Gabi Software) email: ja*********@gmail.com
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung
9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34
May 6 '07 #72
On May 6, 3:30 am, Gianni Mariani <gi3nos...@mariani.ws> wrote:
James Kanze wrote:
On May 6, 1:27 am, Gianni Mariani <gi3nos...@mariani.wswrote:
Pete Becker wrote:
...
>Yup. Typical developer-written test: I don't understand testing well
enough to do it right, so I'll do something random and hope to hit a
problem. <g>
I have yet to meet a "test" developer that can beat the Monte Carlo test
for coverage.
OK - I agree, there are cases where a Monte Carlo test will never be
able to test adequately, but as a rule, it is better to have a MC test
than not. I have uncovered more legitimate problems from the MC test
than from carefully crafted tests.
Which proves that you don't have anyone who knows how to write
tests. A carefully crafted test will, by definition, find any
problem that a MC test will find.
We will have to agree to disagree on this.
There's nothing to disagree with. It's a basic definition. If
a MC test finds the error, and a hand crafted test doesn't, the
hand crafted test isn't well designed or carefully done.
I have at least one piece of anecdotal evidence which suggests that
no one is capable of truly foreseeing the full gamut of issues that
can be found in a well designed MC test.
I have more than anecdotal evidence that there are significant
errors which will slip through MC testing. Admittedly, the
most significant ones also slip through the most carefully
crafted tests as well. It is, in fact, impossible to write a
test for them which will reliably fail.

This is why no shop serious about quality will rely solely on
testing.
A pass on an MC test raises the level of confidence which is always a
good thing.
Certainly. It's just that often, generating enough random data
is more effort than doing things correctly, and beyond a very
low level, doesn't raise the confidence level very much. If we
take Pete's example of a log() function, testing with a million
random values doesn't really give me much more confidence than
testing with a hundred, and both give significantly less
confidence than a good code review, accompanied by testing with
a few critical values. (Which values are critical, of course,
being determined by the code review.)
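
A sketch of what "a few critical values" for log() might look like once
a review has identified them (the exact-equality check on log(1.0)
assumes an IEEE-quality implementation):

#include <cassert>
#include <cfloat>
#include <cmath>

int main() {
    assert(std::log(1.0) == 0.0);       // identity; exact on IEEE implementations
    assert(std::fabs(std::log(std::exp(1.0)) - 1.0) < 1e-15);
    assert(std::log(DBL_MIN) < 0.0);    // smallest normal double: finite result
    assert(std::isinf(std::log(0.0)));  // limit case: -infinity
    assert(std::isnan(std::log(-1.0))); // domain error: NaN
    return 0;
}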
In my experience, the main use of MC tests is to detect when
your tests aren't carefully crafted. Just as the main use of
testing is to validate your process---anytime a test reveals an
error, it is a sign that there is a problem in the process, and
that the process needs improvement.
If I read between the lines here, I think you're saying that we need
test developers to conceive every kind of possible failure. I have yet
to meet anyone who could do that consistently and I have been developing
software for a very long time.
The probability of a single programmer missing something is
close to 1, I agree. The probability of several programmers
missing the same thing, on the other hand, is close to 0. And
the probability of a random test hitting the single input value
for which the code doesn't work is 1/N, where N is the number of
input values. If N is small, exhaustive testing is obviously a
perfect solution. In most cases, however, N is large enough
that a significant sampling by random selection is simply not
possible in a reasonable time.
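
The rough numbers behind that 1/N argument, as a sketch (one bad value
in a 32-bit input space, probed with a million random tests; the figures
are illustrative):

#include <cmath>
#include <cstdio>

int main() {
    double N = std::pow(2.0, 32);  // size of the input space
    double trials = 1e6;           // random test cases
    double p_hit = 1.0 - std::pow(1.0 - 1.0 / N, trials);
    std::printf("P(hit the bad value at least once) = %g\n", p_hit);
    // prints ~0.000233: a million random tests almost certainly miss it
    return 0;
}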
I don't think your premise (if I read it correctly) is achievable.
It seems to work in practice. At least one company is (or was)
at SEI level 5, and even companies at level 3 regularly turn out
software with less than one error per 100 KLoc, going into
integration testing.
I lean toward making the computer do as much work as possible
because it is much more consistent than a developer.
The problem is that the computer only does what you tell it to
do. If you don't tell it to test such and such a feature, it
won't. If you don't tell it what the critical values are
(limits, etc.), then it is unlikely that it will hit them by
chance.

--
James Kanze (Gabi Software) email: ja*********@gmail.com
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung
9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34

May 6 '07 #73
Ian Collins wrote:
>A fool with a tool is still a fool. The challenge in testing is not test
management, but designing test cases to cover the possible failures in
the code under test. That's something that most developers don't do
well, because their focus is on getting the code to run.
Unless the test are written first!
When their focus is on getting the code to pass tests, of course the tests
will be stronger, and the code will be "weaker", in the math sense.

The fun hits the fan when you make a pro tester think they must defend their
job from TDD. If I were a pro tester, doing the sick soak tests and metrics
stuff, I would naturally prefer my developers use TDD, so I don't waste time
installing the test rig and making all the code accessible.

--
Phlip
http://www.oreilly.com/catalog/9780596510657/
"Test Driven Ajax (on Rails)"
assert_xpath, assert_javascript, & assert_ajax
May 6 '07 #74
James Kanze wrote:
Any agile process (XP, Scrum or whatever) is ideal for this situation.
If your goal is to rip off the customer, yes.
I fail to see how a process that gives the customer what they ask for in
one-week iterations could get away with that. Explain?
Prototyping and interacting with the customer are nothing new.
Nobody said they were. However, most students learning software engineering
never hear of them in any textbook, and many bosses like to insulate their
developers from their clients, to "control the message". So the more ways we
can find to advocate direct collaboration, the better.
Look up TDD.
I'm familiar with the theory. Regretfully, it doesn't work out
in practice.
And if I asked why you think that, I suspect I'd be treated to a game of
"move the goal post"...

--
Phlip
http://www.oreilly.com/catalog/9780596510657/
"Test Driven Ajax (on Rails)"
assert_xpath, assert_javascript, & assert_ajax
May 6 '07 #75
Pete Becker wrote:
Nope. Test driven design cannot account for the possibility that a
function will use an internal buffer that holds N bytes, and has to
handle the edges of that buffer correctly. The specification says
nothing about N, just what output has to result from what input.
You slipped from "write tests before code" to "write tests based only on the
specifications before writing code".

Even under high amnesia situations, that N-byte buffer must be one function
call away from its test driver. This improves the odds you notice it and
question if production code can overflow it. And if you expressly treat the
test cases as documenting the code, then you should write tests which
document that buffer.

All methodologies still require review...
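
To make the buffer point concrete, a sketch with a hypothetical process()
function built around a fixed 64-byte chunk buffer; the specification only
promises output == input, so only review (or the nearness of the test
driver) reveals the 64 worth probing:

#include <algorithm>
#include <cassert>
#include <cstring>
#include <string>

// Hypothetical function with an internal 64-byte chunk buffer; its spec
// says "output equals input" and says nothing about the 64.
std::string process(const std::string &in) {
    enum { N = 64 };
    char buf[N];
    std::string out;
    for (std::size_t pos = 0; pos < in.size(); pos += N) {
        std::size_t chunk = std::min<std::size_t>(N, in.size() - pos);
        std::memcpy(buf, in.data() + pos, chunk);
        out.append(buf, chunk);
    }
    return out;
}

int main() {
    // Once the buffer size is known, probe its edges:
    // N-1, N, N+1, 2N-1, 2N, 2N+1.
    for (std::size_t len : {63u, 64u, 65u, 127u, 128u, 129u}) {
        std::string input(len, 'x');
        assert(process(input) == input);
    }
    return 0;
}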

--
Phlip
http://flea.sourceforge.net/PiglegToo_1.html
May 6 '07 #76
I never underestimate the ingenuity of good testers.

Ian, did someone play "fallacy of excluded middle" on you?

;-)

--
Phlip
http://flea.sourceforge.net/PiglegToo_1.html
May 6 '07 #77
Bo Persson wrote:
So Pete will pass your first test with "return 1;".

How many more tests do you expect to write, before you are sure that
Pete's code is always no more than one unit off in the last decimal?
That technique is "fake it til you make it". It's not the core principle of
TDD, but sometimes it's surprisingly powerful. You pass a test with obtusely
simple code, then write the next test to force that code to upgrade in the
correct direction. And you frequently refactor to simplify the code, even if
it's fake.

The surprising part comes when you stop faking. You pass your new test cases
either by trivially extending the existing design, or even by doing nothing.
This system can rapidly produce the Open Closed Principle for a given set of
inputs.

If, for a given situation, it's _not_ surprisingly powerful, you _will_
learn that very early!
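
As a toy illustration of that progression (a hypothetical square()
function, not an example from the thread):

#include <cassert>

// Iteration 1: the only test was assert(square(1) == 1), passed with the
// fake "return 1;". Iteration 2: the second assert below failed the fake,
// forcing the general implementation shown here.
int square(int n) { return n * n; }

int main() {
    assert(square(1) == 1);  // the original test still passes
    assert(square(3) == 9);  // the test that forced the upgrade
    return 0;
}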

--
Phlip
http://flea.sourceforge.net/PiglegToo_1.html
May 6 '07 #78
Pete Becker wrote:
When I was a QA manager I'd bristle whenever any developer said they'd
"let testers" do something. That's simply wrong. Testing is not an adjunct
to development. Testing is a profession, with technical challenges that
differ from, and often exceed in complexity, those posed by development.
The skills required to do it well are vastly different, but no less
sophisticated, than those needed to write the product itself. Most
developers think they can write good tests, but in reality, they simply
don't have the right skills, and test suites written by developers are
usually naive. A development manager who gives testers "complete freedom"
is missing the point: that's not something the manager can give or take,
it's an essential part of effective testing.
Nobody said to fire all the testers. You made that up, then replied to your
presumption.

--
Phlip
http://flea.sourceforge.net/PiglegToo_1.html
May 6 '07 #79
Gianni Mariani wrote:
The most successful teams I worked with were teams that wrote their own
tests.
That's why the testers are not far away, in time, space, or timezone. They
should be in the same lab.

--
Phlip
http://flea.sourceforge.net/PiglegToo_1.html
May 6 '07 #80
Pete Becker wrote:
Yup. Typical developer-written test: I don't understand testing well
enough to do it right, so I'll do something random and hope to hit a
problem. <g>
Nobody said the developers would write soak tests! What part of "write a
thing incorrectly called a 'test' that fails because the next line of code
is not there" are you stuck on?

--
Phlip
http://flea.sourceforge.net/PiglegToo_1.html
May 6 '07 #81
James Kanze wrote:
I find it makes the design and code process easier and way more fun. No
more debugging sessions!
Until the code starts actually being used.
That's _why_ people who use TDD tend to practice Daily Deployment. They
actually use the code immediately.

--
Phlip
http://flea.sourceforge.net/PiglegToo_1.html
May 6 '07 #82
Ian Collins wrote:
If you code test first (practice Test Driven Design/Development), you
don't have to do coverage analysis because your code has been written to
pass the tests.
Each line has coverage. That's still not the ultimate coverage, where tests
cover every path through the entire program. But the effect on code makes
such coverage less important. The tests force the code to be simpler,
because it must pass simple tests.

--
Phlip
http://www.oreilly.com/catalog/9780596510657/
"Test Driven Ajax (on Rails)"
assert_xpath, assert_javascript, & assert_ajax
May 6 '07 #83
James Kanze wrote:
Note that the two steps above work together. The people doing the code
review also review the tests, and verify that your choice of test cases
was appropriate for the code you wrote.
Oh, you must have read Rule #42 in the Official TDD Handbook:

"After writing simple code to pass the test, you must then immediately
release your code, without letting anyone review it or add more tests to
it."

--
Phlip
http://flea.sourceforge.net/PiglegToo_1.html
May 6 '07 #84
Ian Collins wrote:
No, The first embedded product my team developed using TDD has had about
half a dozen defects reported in 5 years and several thousand units of
field use. They were all in bits of the code with poor tests.
Ain't it sad when former Thought Leaders reach the next bend in the road of
progress, and we discover how ossified their geriatric brains have become?
;-)

--
Phlip
http://flea.sourceforge.net/PiglegToo_1.html
May 6 '07 #85
>The problem with Google:

They use TDD up the wazoo.

It sure shows in their pitiful quality, long time to market, customer
disatisfaction, and snowed executives, huh?

;-)

--
Phlip
http://flea.sourceforge.net/PiglegToo_1.html
May 6 '07 #86
Phlip wrote:
Ian Collins wrote:
>No, The first embedded product my team developed using TDD has had about
half a dozen defects reported in 5 years and several thousand units of
field use. They were all in bits of the code with poor tests.

Ain't it sad when former Thought Leaders reach the next bend in the road of
progress, and we discover how ossified their geriatric brains have become?
;-)
I had wondered where you were hiding!

--
Ian Collins.
May 6 '07 #87
I had wondered where you were hiding!

RoR, holmes.

--
Phlip
http://flea.sourceforge.net/PiglegToo_1.html
May 6 '07 #88
James Kanze wrote:
On May 6, 3:31 am, Ian Collins <ian-n...@hotmail.com> wrote:
>Gianni Mariani wrote:
>>I have met very few customers that would know a spec even if it
smacked them up the side of the head.
>Welcome to the club!
>>Sad. Inevitably it leads to a pissed-off customer.
>Any agile process (XP, Scrum or whatever) is ideal for this situation.

If your goal is to rip off the customer, yes.
So by helping them to get what they really wanted, rather than forcing
them to commit to what they thought they wanted, I'm ripping them off?
The person I'm ripping off is me; I'm doing myself out of all the bug
fixing and rework jobs.

Man you have a strange view of customer focused development.
>This situation is one of the main reasons these processes have evolved.
The short release cycles keep the customer in the loop and give them the
flexibility to change their mind without incurring significant rework
costs.

Prototyping and interacting with the customer are nothing new.
They've been part of every software development process for the
last 30 or 40 years.
There's a huge difference between demonstrating a prototype and
delivering production quality code every week or so.

--
Ian Collins.
May 6 '07 #89
James Kanze wrote:
....
>
There's nothing to disagree with. It's a basic definition. If
a MC test finds the error, and a hand crafted test doesn't, the
hand crafted test isn't well designed or carefully done.
We disagree on the practicality of your claim.
>
>I have at least one piece of anecdotal evidence which suggests that
no one is capable of truly foreseeing the full gamut of issues that
can be found in a well designed MC test.

I have more than anecdotal evidence that there are significant
errors which will slip through MC testing. Admittedly, the
most significant ones also slip through the most carefully
crafted tests as well.
.... ok, so we don't disagree.
... It is, in fact, impossible to write a
test for them which will reliably fail.

This is why no shop serious about quality will rely solely on
testing.
Can we limit the scope of discussion to unit testing. If you want to go
to complete product life-cycle we will be here forever.
>
>A pass on an MC test raises the level of confidence which is always a
good thing.

Certainly. It's just that often, generating enough random data
is more effort than doing things correctly, and beyond a very
low level, doesn't raise the confidence level very much.
OK -- we'll have to disagree again. I've been able to find bugs using
MC tests that were never found by "well crafted tests".
... If we
take Pete's example of a log() function, testing with a million
random values doesn't really give me much more confidence than
testing with a hundred, and both give significantly less
confidence than a good code review, accompanied by testing with
a few critical values. (Which values are critical, of course,
being determined by the code review.)
Right. Very good. So this is your proof that MC tests are all bad?
May 6 '07 #90
Ian Collins wrote:
Have you tried it? Not having to hold code reviews was one of the
biggest real savings for us.
What you are up against here is that Kanze is one of the most aggressive
and competent reviewers out there, and he leads by reviewing. This explains his
devastating accuracy on C++ questions. So by claiming two dumb people can
get by with pairing and TDD, instead of submitting their code to him for
review, you are taking him out of his Comfort Zone.

About those other companies, yes many losers call themselves XP. If we could
get in touch with them, we would ... review their process. Pairing? TDD?
Frequent Releases? Common Workspace? etc.

Many companies call themselves XP as an excuse to not write any
documentation...

Oh, and my day job uses pure XP. Our executives recently let my supervisor
know that the programming department was the group they had the _least_
issues with. And we do code (and test) reviews after each feature.

--
Phlip
http://flea.sourceforge.net/PiglegToo_1.html
May 6 '07 #91
James Kanze wrote:
On May 6, 3:05 am, Gianni Mariani <gi3nos...@mariani.ws> wrote:
>James Kanze wrote:
>...
>>>The latest trends are to write tests first which demonstrate the
requirements, then code (classes+methods).
>>The latest trend where? Certainly not in any company concerned
with good management, or quality software.
>Look up TDD.

I'm familiar with the theory. Regretfully, it doesn't work out
in practice.
What part?
>
>>>In this case you will not
have to do a coverage, but it is a plus. This way, the code you write
will be minimal and easier to understand and maintain.
>>And will not necessarily meet requirements, or even be useful.
>Actually, it does meet the requirements by definition since the test
case demonstrates how the requirements should be met.

Bullshit. I've seen just too many cases of code which is wrong,
but for which no test suite is capable of reliably triggering
the errors.
Example?
>
>See my "log"ging example.

You mean where you confused the mathematical function "log()"
with a function used to log error messages?
Cute!
May 6 '07 #92
James Kanze wrote:
On May 6, 3:34 am, Gianni Mariani <gi3nos...@mariani.ws> wrote:
>Short release cycles - I invented them.

You must be significantly older than me, then, because they were
the rule when I was learning software development, in the
1970's. (In fact, they were the rule where my father worked,
in the 1960's.)
What does short mean to you?
May 6 '07 #93
James Kanze wrote:
On May 6, 3:12 am, Ian Collins <ian-n...@hotmail.com> wrote:
>James Kanze wrote:
>>I've worked with the people in charge. We evaluated the
procedure, and found that it simply didn't work. Looking at
other companies as well, none practicing eXtreme Programming
seem to be shipping products of very high quality. In fact, the
companies I've seen using it generally don't have the mechanisms
in place to actually measure quality or productivity, so they
don't know what the impact was.
>We certainly did - field defect reports and the internal cost of
correcting them.

So what were the before and after measures?

You should at least publish this, since to date, all published
hard figures (as opposed to anecdotal evidence) go in the
opposite sense. (For that matter, the descriptions of eXtreme
Programming that I've seen didn't provide a means of actually
measuring anything.)
I don't have the exact before figures, but there were dozens of bugs in
the system for the previous version of the product and they took a
significant amount of developer and test time. The lack of unit tests
made the code extremely hard to fix without introducing new bugs.
Comprehensive unit tests are the only way to break out of this cycle.

We didn't bother tracking bugs for the replacement product, there were
so few of them and due to their minor nature, they could be fixed within
a day of being reported. We had about 6 in the first year.
>>When I actually talk to the engineers involved, it turns out
that e.g. they weren't using any accepted means of achieving
quality before. It's certain that adopting TDD will improve
things if there was no testing whatsoever previously.
Similarly, pair programming is more cost-efficient than never
letting a second programmer look at, or at least understand,
another programmer's code, even if it is an order of magnitude or more
less efficient than a well-run code review.
>Have you tried it? Not having to hold code reviews was one of the
biggest real savings for us.

Yes, we tried it. It turns out that effective code (and design)
review is the single most important element of producing quality
code.
Sounds like you weren't pairing.
>
>>Compared to
established good practices, however, most of the suggestions in
eXtreme Programming represent a step backwards.
>That's your opinion and you are entitled to it.

Actually, it's not an opinion. It's a result of concrete
measurement.
So you practiced full on XP for a few months and measured the results?

--
Ian Collins.
May 6 '07 #94
James Kanze wrote:
On May 6, 3:05 am, Gianni Mariani <gi3nos...@mariani.ws> wrote:
>James Kanze wrote:
>...
>>>The latest trends are to write tests first which demonstrate the
requirements, then code (classes+methods).
>>The latest trend where? Certainly not in any company concerned
with good management, or quality software.
>Look up TDD.

I'm familiar with the theory. Regretfully, it doesn't work out
in practice.
Works well for me. Again, it's clear you have never tried it.
>>>In this case you will not
have to do a coverage, but it is a plus. This way, the code you write
will be minimal and easier to understand and maintain.
>>And will not necessarily meet requirements, or even be useful.
>Actually, it does meet the requirements by definition since the test
case demonstrates how the requirements should be met.

Bullshit. I've seen just too many cases of code which is wrong,
but for which no test suite is capable of reliably triggering
the errors.
Now that I'd like to see.

--
Ian Collins.
May 6 '07 #95
Gianni Mariani wrote:
>I'm familiar with the theory. Regretfully, it doesn't work out
in practice.

What part ?
The rules of TDD are "write a simple test and write code to pass the test".

But some people are very smart, and know better than to do "simple" things.
So they insist on "write a QA-quality test case that completely constrains
production code before writing it".

When that doesn't work, they blame TDD.

--
Phlip
http://www.oreilly.com/catalog/9780596510657/
"Test Driven Ajax (on Rails)"
assert_xpath, assert_javascript, & assert_ajax
May 6 '07 #96
Ian Collins wrote:
>Bullshit. I've seen just too many cases of code which is wrong,
but for which no test suite is capable of reliably triggering
the errors.

Now that I'd like to see.
He will play the complexity card: Multi threading, encryption, security,
etc.

Based, again, on insisting that "test" can only mean "high-level QA soak
test".

--
Phlip
http://www.oreilly.com/catalog/9780596510657/
"Test Driven Ajax (on Rails)"
assert_xpath, assert_javascript, & assert_ajax
May 6 '07 #97
Phlip wrote:
Pete Becker wrote:
>When I was a QA manager I'd bristle whenever any developer said they'd
"let testers" do something. That's simply wrong. Testing is not an adjunct
to development. Testing is a profession, with technical challenges that
differ from, and often exceed in complexity, those posed by development.
The skills required to do it well are vastly different, but no less
sophisticated, than those needed to write the product itself. Most
developers think they can write good tests, but in reality, they simply
don't have the right skills, and test suites written by developers are
usually naive. A development manager who gives testers "complete freedom"
is missing the point: that's not something the manager can give or take,
it's an essential part of effective testing.

Nobody said to fire all the testers. You made that up, then replied to your
presumption.
Agreed: nowhere has anyone said to fire all the testers. The text you
quote does not in any way address such a (non-existent) assertion.

--

-- Pete
Roundhouse Consulting, Ltd. (www.versatilecoding.com)
Author of "The Standard C++ Library Extensions: a Tutorial and
Reference." (www.petebecker.com/tr1book)
May 6 '07 #98
Pete Becker wrote:
Agreed: nowhere has anyone said to fire all the testers. The text you
quote does not in any way address such a (non-existent) assertion.
Okay. You are saying "developers can't write the best tests, so therefore
they shouldn't write any tests". Am I closer?

--
Phlip
http://flea.sourceforge.net/PiglegToo_1.html
May 6 '07 #99
Phlip wrote:
Ian Collins wrote:
>If you code test first (practice Test Driven Design/Development), you
don't have to do coverage analysis because your code has been written to
pass the tests.

Each line has coverage. That's still not the ultimate coverage, where tests
cover every path through the entire program. But the effect on code makes
such coverage less important. The tests force the code to be simpler,
because it must pass simple tests.
As a practical matter, you can't test every path through the code. You
get killed by combinatorial explosions.
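
The arithmetic behind that explosion, as a quick sketch: k independent
two-way branches give 2^k paths through a function, so exhaustive path
coverage is out of reach even for modest code:

#include <cstdio>

int main() {
    unsigned long long paths = 1;
    for (int k = 1; k <= 40; ++k) {
        paths *= 2;  // each independent branch doubles the path count
        if (k == 10 || k == 20 || k == 40)
            std::printf("%2d branches -> %llu paths\n", k, paths);
    }
    return 0;
}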

--

-- Pete
Roundhouse Consulting, Ltd. (www.versatilecoding.com)
Author of "The Standard C++ Library Extensions: a Tutorial and
Reference." (www.petebecker.com/tr1book)
May 6 '07 #100
