Bytes IT Community

Compatible codes for both Visual Studio 2005 and gcc

Dear all,

I'm doing an experiment to see whether the code I write can be compiled
by both VS2005 and gcc. I saw somewhere on the web that I can use a
compiler's predefined macro, something like "__cplusplus" or
"_WIN32". My question is: is there any way that I can make code that
can be compiled by both beautiful compilers? Any comments or
suggestions would be much appreciated.

regards,

Sep 23 '07 #1
9 Replies


Alexander Dong Back Kim wrote:
> [...] My question is: is there any way that I can make code that
> can be compiled by both beautiful compilers?
I would do that by installing CygWin, to get a complete bash environment,
with its GCC. Then I'd use UnitTest++ to write lots of unit tests, and I'd
configure some scripts and Makefiles to compile and run all the tests each
time I changed the code. I always configure these scripts to trigger
automatically when I save the files in my editor. That reduces development
to generally making a few edits, saving the files, waiting for a positive
message in another window, and making more edits. All without mouse abuse,
or leaving the editor, or manual testing.

BTW this newsgroup is not qualified to discuss any of these things, beyond
their trivial overview. Each has its own forums where you'll get the best
answers.

Then use #ifdef _WIN32 to flag the compiler-specific stuff, but it's best to
put that into separate libraries. That, plus the unit tests, helps keep your
code decoupled.
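For reference, the predefined macros in question are spelled `__cplusplus` and `_WIN32`. A minimal sketch of testing them, with function names invented purely for illustration:

```cpp
#include <string>

// Hypothetical helpers (names invented) showing the two predefined
// macros: __cplusplus is defined by every conforming C++ compiler,
// and _WIN32 by compilers targeting Windows (VC++ among them).
std::string target_platform()
{
#ifdef _WIN32
    return "Windows";      // VC++ (and MinGW gcc) take this branch
#else
    return "not Windows";  // g++ on Linux/Unix takes this one
#endif
}

bool compiled_as_cpp()
{
#ifdef __cplusplus
    return true;           // always true under a C++ compiler
#else
    return false;
#endif
}
```

The preprocessor resolves the branch at compile time, so each compiler only ever sees the code meant for it.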

--
Phlip
http://www.oreilly.com/catalog/9780596510657/
"Test Driven Ajax (on Rails)"
assert_xpath, assert_javascript, & assert_ajax
Sep 23 '07 #2

On Sep 23, 7:16 pm, "Phlip" <phlip...@yahoo.com> wrote:
> Alexander Dong Back Kim wrote:
> > [...] is there any way that I can make code that can be
> > compiled by both beautiful compilers?
> I would do that by installing CygWin, to get a complete bash
> environment, with its GCC.
There are at least three "Unix-like" environments for Windows:
CygWin, UWin and MSys. From experience, I'd avoid CygWin. MSys
is probably the lightest weight, and the best solution for
someone only using it for compatibility issues, UWin gives a
more completely Unix look-and-feel, but doesn't integrate into
Windows as well. And CygWin is just a disaster on all levels.

But I'm not sure that that's the question (and if it was, it
wouldn't really be relevant here). The question is how to write
compatible C++, which will compile with both
compilers---presumably VC++ under Windows, and g++ under Linux
or Unix. The answer, of course, is to write standard C++, and
to avoid the system specific API's, encapsulating them when you
must use them.
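The encapsulation James describes can be as small as one function. As a hedged illustration (the wrapper name is invented here): case-insensitive string comparison is a classic portability sore spot, since VC++ spells it `_stricmp` and POSIX spells it `strcasecmp`, so a portable program hides the choice behind one wrapper:

```cpp
// Hypothetical portable wrapper -- only this one spot in the code
// base knows which system-specific function is underneath.
#ifdef _WIN32
  #include <string.h>   // VC++: _stricmp lives here
  inline int compare_nocase(const char* a, const char* b)
  {
      return _stricmp(a, b);
  }
#else
  #include <strings.h>  // POSIX: strcasecmp (not standard C++)
  inline int compare_nocase(const char* a, const char* b)
  {
      return strcasecmp(a, b);
  }
#endif
```

Everything else in the program calls `compare_nocase` and compiles unchanged under both compilers.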
> Then I'd use UnitTest++ to write lots of unit tests, and I'd
> configure some scripts and Makefiles to compile and run all
> the tests each time I changed the code.
That's a different problem. I certainly can't imagine anyone
writing makefiles which didn't run the unit tests automatically
before making the component visible to other components. But
how to do this is more a problem of how to write portable
makefiles. It's a problem which isn't really on topic here, but
I might add that I've yet to find a solution, other than
installing the same make (GNU make) everywhere.
> I always configure these scripts to trigger automatically when
> I save the files in my editor.

You mean you can't save partial edits? I tend to save any
time I've written two or three lines of code; long before I've
gotten anything which will compile. (Save early, save often, as
the man said.)

--
James Kanze (GABI Software) email:ja*********@gmail.com
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung
9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34

Sep 23 '07 #3

On Sep 24, 5:56 am, James Kanze <james.ka...@gmail.com> wrote:
> [...] The answer, of course, is to write standard C++, and
> to avoid the system specific APIs, encapsulating them when you
> must use them.
Hi James,

That was an excellent answer for me. Thanks. Although I haven't got
the answer I wanted to hear, I've realized that my approach is
meaningless. BTW, thanks to everyone for responding to my question =)

Cheers,

Sep 23 '07 #4

James Kanze wrote:
> > I always configure these scripts to trigger automatically when
> > I save the files in my editor.
> You mean you can't save partial edits? I tend to save any time
> I've written two or three lines of code; long before I've gotten
> anything which will compile. (Save early, save often, as the man
> said.)
The goal is saving and testing as often as possible, and correctly
predicting the result of each test run. I expect you could correctly predict
a syntax error diagnostic if that's what's coming...

(Nice long-term slow-burn discussion though... for various definitions of
"nice";)

--
Phlip
Sep 24 '07 #5

James Kanze wrote:
> On Sep 23, 7:16 pm, "Phlip" <phlip...@yahoo.com> wrote:
> > I always configure these scripts to trigger automatically when
> > I save the files in my editor.
> You mean you can't save partial edits?

That all depends on your definition of often...

> I tend to save any time I've written two or three lines of code;
> long before I've gotten anything which will compile. (Save early,
> save often, as the man said.)

Sounds like an M$ Word user's mantra :)

--
Ian Collins.
Sep 24 '07 #6

On Sep 24, 3:59 am, "Phlip" <phlip...@yahoo.com> wrote:
> James Kanze wrote:
> > You mean you can't save partial edits? I tend to save any time
> > I've written two or three lines of code; long before I've
> > gotten anything which will compile. (Save early, save often,
> > as the man said.)
> The goal is saving and testing as often as possible, and
> correctly predicting the result of each test run. I expect you
> could correctly predict a syntax error diagnostic if that's
> what's coming...
The thought actually occurred to me after posting that you do
know why you're saving; e.g. whether you're saving just to be
sure, or because you've finished some particular micro-step in
the development process. And of course, it's not too difficult
to just ignore the errors in the first case.

It also occurred to me that most modern editors take frequent
snapshots, detect crashes, and offer the possibility of
restoring from the last snapshot, so the "save early, save
often" rule might be a bit outdated (but if you've programmed
for any length of time on older systems, it does become an
ingrained habit).

And of course, the two editors I know (vim and emacs) both allow
you to define new commands, so you could also define two "save"
sequences, one which triggers the make, and one which doesn't.
In fact, that's exactly how I work in vim: the command ":make"
triggers the save with recompilation (and my makefiles do invoke
the unit tests, unless I specifically invoke them with a target
that doesn't), whereas ":w" doesn't---as I said above, because
of extensive experience on unreliable systems, I tend to invoke
":w" every ten or fifteen keystrokes.

And not really related to this thread, but... Since you seem to
use unit tests even more intensively than I do, how do you
handle the case where the full unit tests take minutes, or even
hours? I have the case in some of my UTF-8 code, where for
practical reasons related to the internal implementation, I want
to hit at least one character in every block of 64 before
releasing the code for other components to use. On one of the
machines I use, this results in a unit test of several hours,
and of course, in my personal iterations, internal to the
component, I probably don't need to be this thorough: one
character of each length would suffice, with a few added limit
cases. So how do you handle a case like this? Because it's
driving me up the wall.
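The sampling strategy James describes (one code point from every block of 64) can be sketched roughly as follows. The encoder and decoder below are minimal stand-ins written only for illustration, not the implementation under discussion, and the surrogate range is skipped since it never appears in well-formed UTF-8:

```cpp
#include <string>

// Throwaway UTF-8 encoder/decoder, written only to illustrate the
// sampling idea -- NOT the implementation under discussion.
std::string encode_utf8(unsigned long cp)
{
    std::string out;
    if (cp < 0x80) {
        out += char(cp);
    } else if (cp < 0x800) {
        out += char(0xC0 | (cp >> 6));
        out += char(0x80 | (cp & 0x3F));
    } else if (cp < 0x10000) {
        out += char(0xE0 | (cp >> 12));
        out += char(0x80 | ((cp >> 6) & 0x3F));
        out += char(0x80 | (cp & 0x3F));
    } else {
        out += char(0xF0 | (cp >> 18));
        out += char(0x80 | ((cp >> 12) & 0x3F));
        out += char(0x80 | ((cp >> 6) & 0x3F));
        out += char(0x80 | (cp & 0x3F));
    }
    return out;
}

unsigned long decode_utf8(const std::string& s)
{
    unsigned char lead = s[0];
    if (lead < 0x80) return lead;
    int extra = (lead >= 0xF0) ? 3 : (lead >= 0xE0) ? 2 : 1;
    unsigned long cp = lead & (0x3F >> extra);  // mask off length bits
    for (int i = 1; i <= extra; ++i)
        cp = (cp << 6) | (static_cast<unsigned char>(s[i]) & 0x3F);
    return cp;
}

// One code point from every block of 64, skipping the surrogate
// range, which never appears in well-formed UTF-8.
bool sampled_round_trip_ok()
{
    for (unsigned long cp = 0; cp <= 0x10FFFF; cp += 64) {
        if (cp >= 0xD800 && cp <= 0xDFFF) continue;
        if (decode_utf8(encode_utf8(cp)) != cp) return false;
    }
    return true;
}
```

Varying the step (64 here, 1 for the exhaustive run) is exactly the trade-off between coverage and wall-clock time that makes the question hard.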

--
James Kanze (GABI Software)

Sep 24 '07 #7

On Sep 24, 5:09 am, Ian Collins <ian-n...@hotmail.com> wrote:
> James Kanze wrote:
> [...]
> > I tend to save any time I've written two or three lines of
> > code; long before I've gotten anything which will compile.
> > (Save early, save often, as the man said.)
> Sounds like an M$ Word user's mantra :)
Never used MS Word. But vi or ed, on older Unix (version 7)...
Not to mention the systems I worked on before Unix came around.
And vi under MS-DOS.

It's now become an automatism; every ten or fifteen keystrokes:
":w". Even though vim (and emacs) saves snapshots, which allow
for semi-automatic recovery.

--
James Kanze (GABI Software)

Sep 24 '07 #8

James Kanze wrote:
> [...] how do you handle the case where the full unit tests take
> minutes, or even hours? [...] So how do you handle a case like
> this? Because it's driving me up the wall.
TDD probably wouldn't result in unit tests of that nature; that
sounds more like a higher level test. TDD tests drive the design, so
they tend to be short logic-proving tests, stepping through a sequence
of events.

For example, I've just added an "all modules failed" flag to a system;
to add this I used the following simple tests:

testHaveAllRegisteredModulesFailedFalseWithNoModules();
testHaveAllRegisteredModulesFailedFalseWithGoodModules();
testHaveAllRegisteredModulesFailedFalseWithOneGoodAndOneBadModule();
testHaveAllRegisteredModulesFailedTrueWithAllBadModules();

The application has 1100 tests which run in about 50 seconds
(unoptimised) on my machine. If the test time gets above 60 seconds, we
profile and optimise.

The test framework (CppUnit) makes it easy to run tests for one module,
which we usually do if we are only working in one area.
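Ian's example might look roughly like this in plain C++. The `ModuleRegistry` class and its behaviour are guessed from the test names (he uses CppUnit; this sketch makes do with `assert`, and only two of the four cases are shown):

```cpp
#include <vector>
#include <cassert>
#include <cstddef>

// Guessed-at registry: haveAllRegisteredModulesFailed() is false when
// nothing is registered, and true only when every module has failed.
class ModuleRegistry
{
public:
    void registerModule(bool failed) { failed_.push_back(failed); }

    bool haveAllRegisteredModulesFailed() const
    {
        if (failed_.empty()) return false;
        for (std::size_t i = 0; i != failed_.size(); ++i)
            if (!failed_[i]) return false;
        return true;
    }

private:
    std::vector<bool> failed_;
};

void testHaveAllRegisteredModulesFailedFalseWithNoModules()
{
    ModuleRegistry r;
    assert(!r.haveAllRegisteredModulesFailed());
}

void testHaveAllRegisteredModulesFailedTrueWithAllBadModules()
{
    ModuleRegistry r;
    r.registerModule(true);
    r.registerModule(true);
    assert(r.haveAllRegisteredModulesFailed());
}
```

Each test exercises one short sequence of events against one expected outcome, which is why a suite of 1100 of them can still run in under a minute.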

--
Ian Collins.
Sep 24 '07 #9

P: n/a
James Kanze wrote:
> On Sep 24, 11:31 am, Ian Collins <ian-n...@hotmail.com> wrote:
> > James Kanze wrote:
> > > [...] how do you handle the case where the full unit tests
> > > take minutes, or even hours? [...]
> > TDD probably wouldn't result in unit tests of that nature; that
> > sounds more like a higher level test.

> It's really rather irrelevant what TDD would result in, since I
> don't use it. I do use extensive unit tests, and I want them to
> be as complete as possible. Testing every Unicode value is
> perhaps a little too exhaustive, but the implementation works in
> blocks of 64 characters, and testing one in each block isn't that
> excessive for unit tests which are only run when the component is
> exported (which shouldn't be that often). But it still takes too
> much time for the iterative phase of development.
Are you testing something that can fail more than once? If you
deliberately break something, how many tests fail?

It's difficult to comment without knowing your problem domain, but from
my experience, TDD tests tend to be more concise than tests added after
the fact.
> > TDD tests drive the design, so they tend to be short
> > logic-proving tests, stepping through a sequence of events.
> If you're claiming that TDD uses incomplete tests, that doesn't
> address my issue. Of course, all tests are incomplete, in one
> way or another.
Exactly. No one would claim that TDD tests are the be-all and end-all
of testing. Anyone using TDD will also be using black box automated
acceptance tests. TDD tests are written to drive the design and to
verify the code still works as designed after each refactor.

--
Ian Collins.
Sep 24 '07 #10

This discussion thread is closed.