Bytes IT Community

Alternatives to using virtuals for cross-platform development

Hi,

I'm interested in developing an application that needs to run on more
than one operating system. Naturally, a lot of the code will be shared
between the various OSs, with OS specific functionality being kept
separate.

I've been going over the various approaches I could follow in order to
implement the OS-specific functionality. The requirements I have are
as follows :
- There is a generic interface that needs to be implemented on each
OS. This is what the client code would call.
- In addition to the common/generic interface, each OS-specific
implementation can expose an OS-specific interface.

The first requirement should be fairly clear. The second one is there
to allow the OS-specific part of one sub-system to use the OS-specific
part of another sub-system (I'm assuming that the two sub-systems know
at design time that they will both be implemented for any given OS).

Probably the most obvious way to go about this is to use an abstract
base class to define the generic interface and this gets subclassed by
concrete, OS-specific implementations. Access to OS-specific
interfaces is then provided by means of a dynamic_ (or even static_?)
cast.
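For concreteness, that abstract-base-class route might look like the sketch below (class and function names are invented for illustration, not taken from any real codebase):

```cpp
// Generic interface that OS-independent client code programs against.
class IFoo
{
public:
    virtual ~IFoo() {}
    virtual int GetID() const = 0;
};

// One concrete, OS-specific implementation.
class LinuxFoo : public IFoo
{
public:
    virtual int GetID() const { return 42; } // would query the OS here
    void LinuxOnlyFunc() {}                  // OS-specific extension
};

// Generic client code sees only IFoo...
inline int UseGeneric(const IFoo& foo) { return foo.GetID(); }

// ...while OS-aware code reaches the extended interface via a cast.
inline LinuxFoo* AsLinux(IFoo* foo) { return dynamic_cast<LinuxFoo*>(foo); }
```

The dynamic_cast only succeeds when the object really is the expected OS type, which is what makes it safer than a static_cast here.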

My first worry about this is performance. Some of the functions in the
generic interface will be very small (e.g. int GetID() const ), called
very often, or both.

The other issue that bothers me (and this verges on the philosophical)
is that using virtuals somehow doesn't feel right. It would feel right
if I had one interface and multiple
implementations in the same executable/module (i.e. at run-time). In
my case I'm more interested in one interface and multiple 'compile
time' implementations, so the overhead of the virtual mechanism is
kind of wasted.

So, looking at alternatives. Compile-time polymorphism sounds like a
good candidate. You have a base class defining all your functions,
which are implemented using an OS-specific implementation class. Or,
looking at it a different way, you can use this approach to guarantee
that a class implements an interface, but without using virtuals (pure
or otherwise). A quick code snippet to explain this a bit more:

template<class OSImp>
class BaseFoo
{
public: void Bar() { static_cast<OSImp*>(this)->Bar(); }
};

class MyFavOSFoo : public BaseFoo<MyFavOSFoo>
{
public: void Bar() { /* do some OS specific stuff */ }
public: void OSFunc() { /* some OS-specific interface */ }
private: // store some OS specific data
};

Now this is more like what I want, I can have multiple
implementations, without any run-time overhead.

There are a couple of problems however (otherwise I wouldn't be here,
would I? :)

The client code, which is OS-independent, no longer has a polymorphic
base class that it can just use. I somehow need to use MyFavOSFoo
directly. The most obvious solution that comes to mind (but I'm open
to suggestions) is to have the client code use a class called 'Foo'. I
could then have an OS specific header file that has :

typedef MyFavOSFoo Foo;

The problem then is that the selection of implementation/OS boils down
to #including the right header, which seems very fragile.
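One common way to contain that fragility is to funnel the choice through a single selection header, so that exactly one file in the codebase knows which implementation is in play. A minimal sketch (the macro and class names are made up; a real version would #include per-OS headers rather than define the classes inline):

```cpp
// foo_select.h (sketch) -- the only file that knows the target OS.
// The per-OS classes are defined inline here purely so the sketch is
// self-contained; real code would pull them in from per-OS headers.
#define PLATFORM_POSIX 1 // normally -DPLATFORM_POSIX from the build system

#if defined(PLATFORM_WIN32)
struct Win32Foo { int GetID() const { return 1; } };
typedef Win32Foo Foo;
#elif defined(PLATFORM_POSIX)
struct PosixFoo { int GetID() const { return 2; } };
typedef PosixFoo Foo;
#else
#error "unsupported platform"
#endif
```

An unrecognized platform then fails loudly at compile time instead of silently picking the wrong header.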

Then I started thinking of some less 'common' solutions (aka hacks).
Most I've already discarded, but one that seems to have stuck in my
mind is the following :

Have a 'public' header file defining the public interface. Then have
an OS-specific .h and .cpp which implement the interface defined in
the public .h. i.e.

// public.h
class Foo
{
public: void Bar();
// No data members
};

// private.h
class Foo // same name
{
public: void Bar();
private: // Some OS specific data that the client code
         // doesn't need to know about
};

// private.cpp
void Foo::Bar() { /* do some OS specific stuff */ }

(obviously this needs some sort of Create() function, as the client
code can't call new() or anything like that)

This does pretty much all I want. I can have multiple compile-time
implementations and there is no runtime overhead. I can put private.h
and private.cpp in a static library and let the linker do its magic.

A couple of problems though :
- I would need to make darned sure that the interface in public.h
matches that in private.h (even the declaration order has to match)
- I would need to maintain twice the number of header files (and there
are a lot of them).


Now that I've presented my case, a couple of questions :

1. Am I just being silly? Should I just use virtuals and be done with
it? I know when the performance impact of virtual functions has been
discussed in the past, the advice often is 'profile it and make a
decision based on the results'. The thing is, I'm still at the design
process and hence have nothing to profile. Also, I would expect that
there is at least some variation between OSs/compilers. What if it
performs great on 3 out of 4 platforms? I would be locked in with that
design.

2. Am I trying to use C++ in a way that it was not designed to be
used? For example, C# and Java have the concept of a module (or
package or assembly or whatever) that is a monolithic unit that
exposes some types and interfaces. C++'s answer to this would be a
pure virtual class.

3. Is there a 'standard' approach to cross-platform development that
I've completely missed?
Many thanks,

Bill

Jun 7 '07 #1
6 Replies


greek_bill wrote:
>I'm interested in developing an application that needs to run on more
>than one operating system. Naturally, a lot of the code will be shared
>between the various OSs, with OS specific functionality being kept
>separate.
>....
>Now this is more like what I want, I can have multiple
>implementations, without any run-time overhead.
>....
I have used template specializations for this sort of thing (tested):

#include <iostream>

using namespace std;
enum { OS1, OS2 };

// configuration (in a system-wide header someplace)
const int OS = OS2; // or give -D to the compiler from make, or whatever


template<int OS_ID> class OSClass;

template<> class OSClass<OS1> {
public: void Bar() { cout << "os1.Bar()" << endl; }
public: void OSFunc() { /* some OS1-specific interface */ }
private: //store some OS1 specific data
};

template<> class OSClass<OS2> {
public: void Bar() { cout << "os2.Bar()" << endl; m_os1.Bar(); }
public: void OSFunc() { /* some OS2-specific interface */ }
private: //store some OS2 specific data
OSClass<OS1> m_os1; // use OS1 stuff in secret.
};

// Application code
int main (void)
{
OSClass<OS> os;
os.Bar();
}
You could add an abstract base class for both if you want the compiler
to check that both have the proper interface.
Jun 7 '07 #2

Me
On Thu, 07 Jun 2007 14:38:47 -0700, Noone wrote:
>Now that I've presented my case, a couple of questions :
>
>1. Am I just being silly? Should I just use virtuals and be done with it?
>
>2. Am I trying to use C++ in a way that it was not designed to be used?
>
>3. Is there a 'standard' approach to cross-platform development that
>I've completely missed?

There is something that rubs me the wrong way about trying to include OS
specific code for multiple OSs in a single executable, and to be honest, I
don't think it will work well, if at all. Different OSs mean different
support libraries and probably different startup and linkage mechanisms.
Take the example of a windoze vs linux executable. Even a simple dosbox
application that tries to run natively (intelligently) in linux and
determines which OS is active at runtime is gonna have real problems,
because IIRC the startup for the executables is different even if they
use the same native CPU.

Stick with separate compile builds for each OS and you'll be much happier
in the long run. KISS: keep it simple, stupid. :^)

I do something similar to what you mention with virtual base classes all
the time but it is from the opposite direction. I have an operator
control unit application for unmanned vehicles. I must support a variety
of human interface devices that have differing physical interfaces,
protocols, etc. I create an interface base class with virtuals, then
implement the HW specific functions in subclasses and use try/catch blocks
to create the HID object for the first one in which the constructor
doesn't fail. This works extremely well and allows me to add any number
of different human interface devices: keyboards, touchscreens, joysticks,
whatever.

Jun 8 '07 #3

Me wrote:
>On Thu, 07 Jun 2007 14:38:47 -0700, Noone wrote:
>>Now that I've presented my case, a couple of questions :
>>...
>
>There is something that rubs me the wrong way about trying to include OS
>specific code for multiple OSs in a single executable ...
>
>Stick with separate compile builds for each OS and you'll be much happier
>in the long run. KISS: keep it simple, stupid. :^)
They are not necessarily in the same executable, only in the same
code-set. And if he is specializing on different versions of the same
OS, and especially if the set of OS-specific facets he's modeling is
small, I don't think having them in the same code-set is a problem. As
an example of handling different version of an OS, and as an extension
to my previous post, this (tested):
#include <iostream>

using namespace std;

enum { OS_A, OS_B_V2, OS_B_V2_1 };

const int OS = OS_B_V2_1;

template<int OS_ID> class OSClass;

// Not used, not present in the executable
template<> class OSClass<OS_A> {
public: void Bar() { cout << "osA.Bar()" << endl; }
public: void OSFunc() { /* some OS_A-specific interface */ }
private: //store some OS_A specific data
};

template<> class OSClass<OS_B_V2> {
public: void Bar() { cout << "osBv2.Bar()" << endl; }
public: void OSFunc() { /* some OS_B_V2-specific interface */ }
private: //store some OS_B_V2 specific data
};

template<> class OSClass<OS_B_V2_1> {
typedef OSClass<OS_B_V2> osv2_t;
public:
void Bar() {
m_osv2.Bar();
cout << "osBv2_1.Bar()" << endl;
}
public: void OSFunc() { /* some OS_B_V2.1-specific interface */ }
private: //store some OS_B_V2.1 specific data
osv2_t m_osv2; // Use OS_B version 2 stuff in secret.
};

// Application code
int main (void)
{
OSClass<OS> os;
os.Bar();
}
Jun 8 '07 #4

On Jun 7, 11:38 pm, greek_bill <greek_b...@yahoo.com> wrote:
I'm interested in developing an application that needs to run on more
than one operating system. Naturally, a lot of the code will be shared
between the various OSs, with OS specific functionality being kept
separate.
I've been going over the various approaches I could follow in order to
implement the OS-specific functionality. The requirements I have are
as follows :
- There is a generic interface that needs to be implemented on each
OS. This is what the client code would call.
- In addition to the common/generic interface, each OS-specific
implementation can expose an OS-specific interface.
The first requirement should be fairly clear. The second one is there
to allow the OS-specific part of one sub-system to use the OS-specific
part of another sub-system (I'm assuming that the two sub-systems know
at design time that they will both be implemented for any given OS).
Probably the most obvious way to go about this is to use an abstract
base class to define the generic interface and this gets subclassed by
concrete, OS-specific implementations. Access to OS-specific
interfaces is then provided by means of a dynamic_ (or even static_?)
cast.
My first worry about this is performance. Some of the functions in the
generic interface will be very small (e.g. int GetID() const ), called
very often, or both.
If the function ends up making a system request, it's doubtful
that the cost of a virtual function call will be measurable. At
any rate, I wouldn't worry about it until I'd actually measured
it, and found it to be a problem.
The other issue that bothers me (and this verges on the border of
being philosophical about it), is that using virtuals somehow doesn't
feel right. It would feel right if I had one interface and multiple
implementations in the same executable/module (i.e. at run-time). In
my case I'm more interested in one interface and multiple 'compile
time' implementations, so the overhead of the virtual mechanism is
kind of wasted.
That's very philosophical. It's not the sort of thing I'd worry
about.

On the other hand...
So, looking at alternatives. Compile-time polymorphism sounds like a
good candidate. You have a base class defining all your functions,
which are implemented using an OS-specific implementation class. Or,
looking at it a different way, you can use this approach to guarantee
that a class implements an interface, but without using virtuals (pure
or otherwise).
I often use link time polymorphism for this. You have a single
class definition for all systems, in a header file, and
different implementations for different systems. You link in
whichever one is appropriate. (If the class requires member
data, you might have to use the compilation firewall idiom to
avoid dependencies in the header.)
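That link-time polymorphism can be sketched as below (the two "files" are shown concatenated so the sketch is self-contained; in a real tree they would be separate, and each platform's build would compile exactly one of the per-OS .cpp files):

```cpp
// foo.h -- one class definition, shared verbatim by every platform.
class Foo
{
public:
    Foo();
    int GetID() const;
private:
    int m_id; // platform-neutral data; anything OS-specific would go
              // behind a compilation-firewall pointer instead
};

// foo_posix.cpp -- one of N implementation files; the makefile for
// each platform links exactly one of them. The value 7 is made up.
Foo::Foo() : m_id(7) {} // a real version would query the OS here
int Foo::GetID() const { return m_id; }
```

There are no virtuals and no templates in the client-visible interface; the linker's choice of object file is what selects the implementation.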
A quick code snippet to explain this a bit more:
template<class OSImp>
class BaseFoo
{
public: void Bar() { static_cast<OSImp*>(this)->Bar(); }
};
class MyFavOSFoo : public BaseFoo<MyFavOSFoo>
{
public: void Bar() { /* do some OS specific stuff */ }
public: void OSFunc() { /* some OS-specific interface */ }
private: // store some OS specific data
};
Now this is more like what I want, I can have multiple
implementations, without any run-time overhead.
There are a couple of problems however (otherwise I wouldn't be here,
would I? :)
The main one is the same as for the virtual functions: you
introduce unnecessary complexity for nothing. Philosophically,
how is it unacceptable to have a base class with only one
derived class, but acceptable to have a template with only one
instantiation? (In fact, there are also times when I use this
solution.)
The client code, which is OS-independent, no longer has a polymorphic
base class that it can just use. I somehow need to use MyFavOSFoo
directly. The most obvious solution that comes to mind (but I'm open
to suggestions) is to have the client code use a class called 'Foo'. I
could then have an OS specific header file that has :
typedef MyFavOSFoo Foo;
The problem then is that the selection of implementation/OS boils down
to #including the right header, which seems very fragile.
Why? Give both headers the same name, put them in different
directories, and select which one by means of a -I option
positioned in the makefile.

My personal solution here would be, I think, the common class
definition, with separate implementations, and a function
returning a pointer to a forward declared class with the OS
specific parts, i.e.:

class OSSpecific ;

class Unspecific
{
public:
// the usually generic function declarations...
OSSpecific* getOSSpecific() ;
} ;

Obviously, anyone wanting to use the OS specific stuff would
have to include an additional, OS specific header, but then,
he'd only be doing so if he needed functions which were only
available on a specific OS anyway.
Then I started thinking of some less 'common' solutions (aka hacks).
Most I've already discarded, but one that seems to have stuck in my
mind is the following :
Have a 'public' header file defining the public interface. Then have
an OS-specific .h and .cpp which implement the interface defined in
the public .h. i.e.
// public.h
class Foo
{
public: void Bar();
// No data members
};
// private.h
class Foo // same name
{
public: void Bar();
private: // Some OS specific data that the client code
         // doesn't need to know about
};
That results in undefined behavior, and would likely cause
crashes and such.
// private.cpp
void Foo::Bar() { /* do some OS specific stuff */ }
(obviously this needs some sort of Create() function, as the client
code can't call new() or anything like that)
The client can't include the header, in fact, without incurring
undefined behavior. Just use the compilation firewall idiom in
Foo, and there should be no problem.
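A sketch of that compilation-firewall (pimpl) variant, with the header entirely free of OS-specific data (names and the placeholder value are invented for illustration):

```cpp
// foo.h -- identical for all platforms; clients never see Impl.
class Foo
{
public:
    Foo();
    ~Foo();
    int GetID() const;
private:
    struct Impl;  // defined only in the per-OS .cpp
    Impl* m_impl;
    Foo(const Foo&);            // copying suppressed for brevity
    Foo& operator=(const Foo&);
};

// foo_someos.cpp -- per-OS implementation; the 13 stands in for
// whatever the real OS call would return.
struct Foo::Impl { int id; };
Foo::Foo() : m_impl(new Impl) { m_impl->id = 13; }
Foo::~Foo() { delete m_impl; }
int Foo::GetID() const { return m_impl->id; }
```

Because the header never mentions OS types, there is a single class definition everywhere and no one-definition-rule trouble; the cost is one pointer indirection per access to the hidden data.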
This does pretty much all I want. I can have multiple compile-time
implementations and there is no runtime overhead. I can put private.h
and private.cpp in a static library and let the linker do its magic.
A couple of problems though :
- I would need to make darned sure that the interface in public.h
matches that in private.h (even the declaration order has to match)
In fact, the token sequence after preprocessing of the entire
class must match, and all names must bind identically. (There
are a very few exceptions.)

[...]
2. Am I trying to use C++ in a way that it was not designed to be
used? For example, C# and Java have the concept of a module (or
package or assembly or whatever) that is a monolithic unit that
exposes some types and interfaces. C++'s answer to this would be a
pure virtual class.
No. C++'s answer is that you have a choice of solutions,
and can use whichever one is best for your application. Java
forces you to use one particular solution, whether it is best or
not.
3. Is there a 'standard' approach to cross-platform development that
I've completely missed?
The most frequent one I've seen is a common header, using the
compilation firewall idiom, and separate implementations. But
I've also seen the abstract base class used.

--
James Kanze (Gabi Software) email: ja*********@gmail.com
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung
9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34

Jun 8 '07 #5

As numeromancer said, I'm not interested in adding support for
multiple OSs in the same executable (is that even possible?)
It's more along the lines of using OS independent code together with
OS specific code to produce an OS specific executable.

Numeromancer, what you're suggesting is conceptually similar to the
compile-time polymorphism example that I'm describing above. The
problem, or shortcoming rather, of both approaches is that you don't
have a base-class (or equivalent) for the client code to use.

If we were using virtuals, we'd have

class IFoo
{
// Defines interface
};

class OSFoo : public IFoo
{
// Implements interface
};

Client code only needs to know and use IFoo.

Now, I know I can't quite have that with static polymorphism, which is
why I was saying you might need to do something like :

#if defined(OS1)
typedef BaseFoo<OS1Foo> Foo;
#elif defined(OS2)
typedef BaseFoo<OS2Foo> Foo;
#endif
Anyway, I'm still looking into this.

Jun 8 '07 #6

On Jun 8, 10:20 pm, James Kanze <james.ka...@gmail.com> wrote:
{...}
That results in undefined behavior, and would likely cause
crashes and such.

With regard to having a (partial) public header and a private/
implementation header... surely people must do this all the time when
developing libraries. Take for example a library/SDK that is
distributed as a .h, a .lib and a .dll. Surely the contents of the
header must be just the library's public interface.

As you say, the common parts of the two headers must match exactly
(though I don't think the declarations need to be in the same order as
I originally said. As long as the mangled names are the same, we
should be fine). In fact, while researching this, I've been keeping an
eye out to see if there is a way to ensure that the interface of the
public header is fully and exactly replicated and implemented by the
private code.
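One cheap compile-time check along those lines (a sketch, not a complete conformance test; all names are hypothetical): taking a member-function pointer to each expected signature forces the implementation class to provide it, and costs nothing at run time.

```cpp
// Fails to compile if T lacks `void Bar()` with exactly that signature.
template<class T>
bool CheckFooInterface()
{
    void (T::*bar)() = &T::Bar;
    return bar != 0;
}

// A conforming implementation (hypothetical):
struct SomeOSFoo { void Bar() {} };

// Instantiating the check somewhere in OS-independent code, e.g.
//   template bool CheckFooInterface<SomeOSFoo>();
// verifies the private class matches the expected interface.
```

This only checks presence and signature of members, not behaviour, but that is exactly the "does private.h match public.h" property in question.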

I must admit I had forgotten about the pimpl idiom that you mentioned.
My initial instinct was 'Yuck, pointer indirection for every data/
member function access?', but maybe that would be a reasonable
compromise. I wonder if there is any way to make the compiler optimise
this away?

Jun 8 '07 #7

This discussion thread is closed; replies have been disabled.