Bytes | Developer Community

C++ implementation for C API ---- converting legacy C code to C++

[My apologies if this LONG posting is off-topic]
In this day and age, you never say no to any work that is thrown at you
---- so when I was offered this short-term contract to convert legacy C
code to C++, I did not say no. Personally I believed that it was a
somewhat futile exercise since one of the main requirements was for the
existing API (a functional interface written in C) to remain the same.
I would have much preferred that the mandate be ab initio, but that was
not the case here. My client had a bad experience with OO, and they
wanted to re-tread this path very very carefully. They were convinced
that a "phased approach" to OO worked much better for them.

I tried to argue (although not very passionately) that functional and
OO design together provide a rather immiscible combination --- one
which nobody is happy with. However, for someone who was prepared to
dig trenches, I did not make a very big deal out of it. Since the API
had to remain the same, and there was not enough time for a detailed
use-case/OO analysis, here's how we went about the task.

1) Studied the current implementation of the APIs --- if it sounded
"coherent", and if it could be easily converted into a method on a
class, we went ahead and did that

2) If the API implementation did not seem coherent, we chopped it into
coherent parts, and each of these parts was then converted into a
method on a class. To make sure that there was no proliferation of
classes, we kept track of all the classes that had been identified
during the process, and tried to reuse previously identified classes as
much as possible. (If I had the time and the mandate to build from
scratch, I would have let the basic object model emerge from a detailed
analysis of the problem domain)

3) Some of the classes that we identified initially sounded like
controller classes ---- that had "or"s and "er"s in their name, and
they violated the "er-er" principle. For each such class we asked the
question: whatever this class does, what does it do it to? For example, one
of the classes that we "discovered" was Configurator ----- and we asked
the question: What does the Configurator configure? The answer was the
Execution Environment ---- so we introduced a class named Environment.

4) All the global variables were replaced by the singleton design
pattern ----- for the most part, each global variable became a
singleton instance, but there were instances where a group of global
variables got tied to the same singleton instance.

5) All the API implementations became a single method invocation on an
object ---- any "fanout" to other methods on collaborating objects were
handled in the top level method only. In other words there was no
control logic or "chaining" of object method invocations left in the
API implementation. Even though we did not think of it as a big deal
initially, it did come in pretty handy when we were asked to capture
the new design in the Client's OO tool. On the sequence diagrams we
were able to show the business logic of the application as an actor and
the API function call as the interface entity. We were thus able to
clearly delineate between the functional and OO worlds ---- on the
"West" side of the interface entity we had the API function call, and
on the "East" side of the interface entity we had a method invocation
on a class. One of my distinguished colleagues had included some
control logic in the API implementation ---- instead of just a single
method call invocation on an object. During the review, he got
repeatedly thrashed by the question: Which entity provided this
functionality?
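The pattern in steps 3-5 can be sketched roughly as follows. This is a hypothetical illustration, not code from the actual project: the names (Environment, env_configure) are invented, but the shape is the one described above, where the C API signature is preserved, global state moves behind a singleton, and the API body shrinks to a single method invocation:

```cpp
#include <string>

// Illustrative sketch only. The C API signature stays unchanged for
// callers; the implementation behind it becomes one method call on a
// class "discovered" behind the old Configurator (step 3), accessed
// through a singleton that replaces a former global (step 4).

class Environment {
public:
    static Environment& instance() {   // singleton replacing a global
        static Environment env;        // constructed on first use
        return env;
    }
    void configure(const std::string& name) { name_ = name; }
    const std::string& name() const { return name_; }
private:
    Environment() = default;           // no public construction
    std::string name_;
};

// The legacy C API, unchanged for callers: its whole body is a single
// method invocation on a collaborating object (step 5).
extern "C" int env_configure(const char* name) {
    if (name == nullptr) return -1;    // legacy-style error code
    Environment::instance().configure(name);
    return 0;
}
```

Callers compiled as C continue to link against env_configure unchanged; everything "east" of that call is the OO world.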

All in all I believe that we were able to provide a fairly satisfactory
result (B- in my own estimation) to our client. They seem to be
satisfied, although, truth be told, this is hardly how I would have
proceeded if I had the time and the mandate to start from scratch.

Did I miss anything obvious? Did I do anything blatantly wrong? Did
I overlook anything? Any feedback will be appreciated.

Masood

Aug 23 '05 #1
3 replies, 1636 views
masood.iqbal wrote:
> [My apologies if this LONG posting is off-topic]
Apologize for cross-posting it. I set the followups to the only on-topic
newsgroup.
> In this day and age, you never say no to any work that is thrown at you
> ---- so when I was offered this short-term contract to convert legacy C
> code to C++, I did not say no.
Great. Switch the compiler to use C++, get it all to compile without syntax
errors, run PC-lint or similar on it, and you are done.
> Personally I believed that it was a
> somewhat futile exercise since one of the main requirements was for the
> existing API (a functional interface written in C) to remain the same.
> I would have much preferred that the mandate be ab initio, but that was
> not the case here.
What _result_ does the client expect? It had better not be a beautiful UML
diagram...
> My client had a bad experience with OO
Ahh, now the plotters thicken.

What's wrong with the existing code? How can you _improve_ it, regardless of
its OOness.

All technologies are susceptible to abuse. A former programmer could have
abused your client with OO via any number of antipatterns. Learn what they
were, and learn their patterns, to look better than this programmer. That
should not be hard.
> and they
> wanted to re-tread this path very very carefully. They were convinced
> that a "phased approach" to OO worked much better for them.
So, they want to use Waterfall? That generally makes OO worse.

Nobody writes OO code by planning an entire design, then copying it all to
code. Your remaining text appears to represent only planning, not live code.
> I tried to argue (although not very passionately) that functional and
> OO design together provide a rather immiscible combination --- one
> which nobody is happy with.
Good thing you didn't bring in passion, because you are arguing about the
process used to design, not the result. OO has nothing to do with "draw a
diagram of your code and finish the diagram before writing the result."
> However, for someone who was prepared to
> dig trenches, I did not make a very big deal out of it. Since the API
> had to remain the same
Then I repeat the question: What value will you add to this code? If it
works, leave it.
> and there was not enough time for a detailed
> use-case/OO analysis, here's how we went about the task.
>
> 1) Studied the current implementation of the APIs --- if it sounded
> "coherent", and if it could be easily converted into a method on a
> class, we went ahead and did that
So by "study" you mean "write unit tests", per /Working Effectively with
Legacy Code/ by Mike Feathers, right?
> 2) If the API implementation did not seem coherent, we chopped it into
> coherent parts, and each of these parts was then converted into a
> method on a class. To make sure that there was no proliferation of
> classes, we kept track of all the classes that had been identified
> during the process, and tried to reuse previously identified classes as
> much as possible. (If I had the time and the mandate to build from
> scratch, I would have let the basic object model emerge from a detailed
> analysis of the problem domain)
And at each juncture here you wrote real code, right? Or is all this
"chopping" entirely in a UML drawing?
> 3) Some of the classes that we identified initially sounded like
> controller classes ---- that had "or"s and "er"s in their name, and
> they violated the "er-er" principle. For each such class we asked the
> question: whatever this class does, what does it do it to? For example, one
> of the classes that we "discovered" was Configurator ----- and we asked
> the question: What does the Configurator configure? The answer was the
> Execution Environment ---- so we introduced a class named Environment.
>
> 4) All the global variables were replaced by the singleton design
> pattern
Why?
> ----- for the most part, each global variable became a
> singleton instance, but there were instances where a group of global
> variables got tied to the same singleton instance.
That's still global. You could instead pass all globals into where they are
used as arguments.
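Phlip's alternative might look like the following sketch (names invented for illustration): the former global becomes an ordinary object owned by whoever drives the code, and is passed explicitly to the functions that used to reach for it:

```cpp
#include <string>

// Illustrative sketch only. Instead of a singleton, the former global
// state lives in an ordinary value type that the caller owns and
// passes in, making the dependency visible in the signature.

struct AppConfig {
    std::string env_name;   // was: a loose global variable
};

// Before: int set_env_name(const char* n) reached for a global.
// After: the dependency is an explicit parameter.
int set_env_name(AppConfig& cfg, const char* n) {
    if (n == nullptr) return -1;   // legacy-style error code
    cfg.env_name = n;
    return 0;
}
```

Since the external C API must stay fixed, in practice the C entry point would construct or hold the object and thread it through the internal functions; the payoff is that each test can supply its own AppConfig instead of resetting shared state.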

I'm going to stop nitpicking, snip your text, and skip down to your
question.
> Did I miss anything obvious? Did I do anything blatantly wrong? Did
> I overlook anything? Any feedback will be appreciated.


Your code needs to be designed for testing. If your client really really
needs the feature "rewrite this code so it's easier to change and has fewer
bugs", then I would try this strategy:

Get CppUnit or a clone.

Write a test case for a legacy API action, and make it pass.

Write a matching test case for a new "OO" version of the API.

Get that test case to fail.

Write _just_enough_ of the new version to pass. Write low-quality code to
make it pass.

After all tests pass, inspect the quality. Seek any kind of duplication
(including things like too many former globals passed around as arguments),
and resolve that duplication by refactoring the code to improve its design.
"Refactor" means you make the smallest possible change that can return to
passing tests. This refactor step is when you "design", and you do it with
the best evidence your design works - passing tests.

Do not "design too much". Remove any line of code you can if the tests still
pass. The best way to design is incrementally, and with maximum feedback
from live code as you go. So over time the design will grow a little more
complex, but not as complex as a UML diagram could have pushed it.

Done right, irritations like globals and excess parameters should just
disappear.

Repeat for each legacy code action - write a sample unit test on the old
code, and a matching test on the new code. This procedure ("Extract
Algorithm Refactor") mines the legacy code for all its logical rules.

Done right, this procedure will be much faster, safer, and bug-free than
your planning phases would have been. If the tests fail unexpectedly, you
use Undo until they pass, then try again.

Push your code into a version controller each time it's demonstrably better.
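A minimal sketch of the above, using plain assert in the tests instead of CppUnit so it stays self-contained (legacy_upcase and Text are invented stand-ins for a legacy API function and its OO replacement):

```cpp
#include <cctype>
#include <string>
#include <utility>

// Invented legacy C-style function whose behavior we want to pin down
// with a characterization test before replacing it.
extern "C" int legacy_upcase(char* buf) {
    if (!buf) return -1;
    for (char* p = buf; *p; ++p)
        *p = static_cast<char>(std::toupper(static_cast<unsigned char>(*p)));
    return 0;
}

// The new "OO" version: written with just enough code to make the
// matching test pass, then refactored while the tests stay green.
class Text {
public:
    explicit Text(std::string s) : s_(std::move(s)) {}
    void upcase() {
        for (char& c : s_)
            c = static_cast<char>(std::toupper(static_cast<unsigned char>(c)));
    }
    const std::string& str() const { return s_; }
private:
    std::string s_;
};
```

The characterization test asserts what the legacy function actually does to a sample buffer; the matching test then demands the identical behavior from Text, so any divergence between old and new code fails loudly before refactoring begins.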

--
Phlip
http://www.greencheese.org/ZeekLand <-- NOT a blog!!!
Aug 23 '05 #2
Hi Masood,
my 2 cents worth....I recently (2 years ago) converted some ETL
software from C to C++.......and I started with a blank piece of paper
and completely re-wrote the software.....in the end, I was much happier
with that result, as the new version is FAR more flexible/powerful than
anything I could ever have written in C.

One good example.....the previous version, which generated C code, used
to generate about 3,000 lines of code for a specific piece....in
the new version, I was able to collapse this to 10 lines of code
calling a class.

By migrating to C++ I was able to vastly improve productivity of
development, maintenance as well as functionality...

Of course, getting the best out of C++ depends on skills/ability and
design. You can write crappy code in any language.

As your client says 'they had a bad experience with OO'......they do not
understand they had a 'bad experience with poor development
staff'......I work in Data Warehousing, and you should see how many
people have had a 'bad experience with data warehousing' and not 'a bad
experience with poor data warehouse developers'... ;-)

In the end, well designed C++ can be very, very powerful....

I hope your client remains happy for some time......but perhaps they
will be wanting to have the code completely re-designed and re-written
into C++ one day, or perhaps they will just join the rest of the world
and 'buy a package'....LOL!!

Best Regards

Peter Nolan
www.peternolan.com

Aug 23 '05 #3
ma**********@lycos.com wrote:
> [My apologies if this LONG posting is off-topic]
> In this day and age, you never say no to any work that is thrown at you
> ---- so when I was offered this short-term contract to convert legacy C
> code to C++, I did not say no. Personally I believed that it was a
> somewhat futile exercise since one of the main requirements was for the
> existing API (a functional interface written in C) to remain the same.
> I would have much preferred that the mandate be ab initio, but that was
> not the case here. My client had a bad experience with OO, and they
> wanted to re-tread this path very very carefully. They were convinced
> that a "phased approach" to OO worked much better for them.
Makes sense.
> I tried to argue (although not very passionately) that functional and
> OO design together provide a rather immiscible combination --- one
> which nobody is happy with. However, for someone who was prepared to
> dig trenches, I did not make a very big deal out of it. Since the API
> had to remain the same, and there was not enough time for a detailed
> use-case/OO analysis, here's how we went about the task.
>
> 1) Studied the current implementation of the APIs --- if it sounded
> "coherent", and if it could be easily converted into a method on a
> class, we went ahead and did that
>
> 2) If the API implementation did not seem coherent, we chopped it into
> coherent parts, and each of these parts was then converted into a
> method on a class. To make sure that there was no proliferation of
> classes, we kept track of all the classes that had been identified
> during the process, and tried to reuse previously identified classes as
> much as possible. (If I had the time and the mandate to build from
> scratch, I would have let the basic object model emerge from a detailed
> analysis of the problem domain)
>
> 3) Some of the classes that we identified initially sounded like
> controller classes ---- that had "or"s and "er"s in their name, and
> they violated the "er-er" principle. For each such class we asked the
> question: whatever this class does, what does it do it to? For example, one
> of the classes that we "discovered" was Configurator ----- and we asked
> the question: What does the Configurator configure? The answer was the
> Execution Environment ---- so we introduced a class named Environment.
Very good approach. You were identifying responsibilities and grouping
code based on them.
> 4) All the global variables were replaced by the singleton design
> pattern ----- for the most part, each global variable became a
> singleton instance, but there were instances where a group of global
> variables got tied to the same singleton instance.
That's often the safest thing to do. In C code there's an extremely
safe way of doing this called Encapsulate Global References.
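A rough sketch of that refactoring (invented names; the recipe in the book proceeds in smaller, safer steps): a cluster of loose globals moves into a class, one global instance of the class replaces them, and every former reference is routed through the instance:

```cpp
// Illustrative sketch of Encapsulate Global References.
//
// Before (legacy C):
//   int retry_count;
//   double timeout_secs;
//
// After: the globals become members of a class, and a single global
// instance replaces them. Every former reference becomes
// settings.member, so the access points are now explicit and can later
// be redirected (e.g. to a parameter or a test double).

class Settings {
public:
    int retry_count = 3;
    double timeout_secs = 30.0;
};

Settings settings;  // the one global instance replacing the loose globals

int remaining_retries(int used) {
    return settings.retry_count - used;  // was: retry_count - used
}
```

Because every access now goes through settings, a later step can swap the global instance for a passed-in object without hunting down scattered global references again.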
> 5) All the API implementations became a single method invocation on an
> object ---- any "fanout" to other methods on collaborating objects were
> handled in the top level method only. In other words there was no
> control logic or "chaining" of object method invocations left in the
> API implementation. Even though we did not think of it as a big deal
> initially, it did come in pretty handy when we were asked to capture
> the new design in the Client's OO tool. On the sequence diagrams we
> were able to show the business logic of the application as an actor and
> the API function call as the interface entity. We were thus able to
> clearly delineate between the functional and OO worlds ---- on the
> "West" side of the interface entity we had the API function call, and
> on the "East" side of the interface entity we had a method invocation
> on a class. One of my distinguished colleagues had included some
> control logic in the API implementation ---- instead of just a single
> method call invocation on an object. During the review, he got
> repeatedly thrashed by the question: Which entity provided this
> functionality?

> All in all I believe that we were able to provide a fairly satisfactory
> result (B- in my own estimation) to our client. They seem to be
> satisfied, although, truth be told, this is hardly how I would have
> proceeded if I had the time and the mandate to start from scratch.
>
> Did I miss anything obvious? Did I do anything blatantly wrong? Did
> I overlook anything? Any feedback will be appreciated.
>
> Masood

It sounds good to me. Too often people attempt to treat projects
like this as if they were greenfield. Sometimes it comes off fine but
it can be very risky unless you have enough tests on the old system to
give you feedback and help you know what the behavior really is.

Once code goes out in the field, it becomes its own spec in a way.
People depend on the oddest quirks, and often replacement code is wrong
unless it faithfully duplicates all of that behavior.

One interesting thing to think about is why the work was a B- in your
estimation yet very satisfactory to the client. I think the reason is
that we always look to the ideal. It's great but imagine if your
doctor did that. Imagine him binding up your ankle for a sprain and
then saying "damn, with more time I could've made this guy an Olympic
swimmer." Truth be told, I'd like to have a doctor who thinks that way,
and it's great when we think like that as programmers, but that
shouldn't get in the way of noticing what's appropriate in a context and
doing it. We should give ourselves high marks for it.

So, were you able to get some tests in place? Did you have some to
start with?
Michael Feathers
author, Working Effectively with Legacy Code (Prentice Hall 2005)
www.objectmentor.com
Aug 25 '05 #4
