Hi,
I'm interested in developing an application that needs to run on more
than one operating system. Naturally, a lot of the code will be shared
between the various OSs, with OS specific functionality being kept
separate.
I've been going over the various approaches I could follow in order to
implement the OS-specific functionality. The requirements I have are
as follows:
- There is a generic interface that needs to be implemented on each
OS. This is what the client code would call.
- In addition to the common/generic interface, each OS-specific
implementation can expose an OS-specific interface.
The first requirement should be fairly clear. The second one is there
to allow the OS-specific part of one sub-system to use the OS-specific
part of another sub-system (I'm assuming that the two sub-systems know
at design time that they will both be implemented for any given OS).
Probably the most obvious way to go about this is to use an abstract
base class to define the generic interface and this gets subclassed by
concrete, OS-specific implementations. Access to OS-specific
interfaces is then provided by means of a dynamic_cast (or even a
static_cast?).
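To make that concrete, here is a rough sketch of the virtual approach (Timer, PosixTimer and the member functions are just made-up placeholders, not my real interfaces):

```cpp
// Generic interface, shared across OSs (names are placeholders).
class Timer
{
public:
    virtual ~Timer() {}
    virtual int GetID() const = 0;  // part of the generic interface
};

// One OS-specific implementation, with an extra OS-specific call.
class PosixTimer : public Timer
{
public:
    int GetID() const { return 42; }             // generic interface
    int GetPosixClockType() const { return 1; }  // OS-specific extension
};

// OS-aware code casts down when it needs the extended interface.
int QueryClockType(Timer* t)
{
    if (PosixTimer* pt = dynamic_cast<PosixTimer*>(t))
        return pt->GetPosixClockType();
    return -1;  // not the implementation we expected
}
```

The generic client code would only ever see Timer*; only the OS-specific part of another sub-system would perform the cast.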
My first worry about this is performance. Some of the functions in the
generic interface will be very small (e.g. int GetID() const), called
very often, or both.
The other issue that bothers me (and this verges on the
philosophical) is that using virtuals somehow doesn't
feel right. It would feel right if I had one interface and multiple
implementations in the same executable/module (i.e. at run-time). In
my case I'm more interested in one interface and multiple 'compile
time' implementations, so the overhead of the virtual mechanism is
kind of wasted.
So, looking at alternatives. Compile-time polymorphism sounds like a
good candidate. You have a base class defining all your functions,
which are implemented using an OS-specific implementation class. Or,
looking at it a different way, you can use this approach to guarantee
that a class implements an interface, but without using virtuals (pure
or otherwise). A quick code snippet to explain this a bit more:
template<class OSImp>
class BaseFoo
{
public:
    void Bar() { static_cast<OSImp*>(this)->Bar(); }
};

class MyFavOSFoo : public BaseFoo<MyFavOSFoo>
{
public:
    void Bar() { /* do some OS-specific stuff */ }
    void OSFunc() { /* some OS-specific interface */ }
private:
    // store some OS-specific data
};
Now this is more like what I want: I can have multiple
implementations without any run-time overhead.
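For what it's worth, generic code can still be written against this by templating on the implementation type instead of holding a base pointer. A sketch, reusing the names above (the counter just stands in for real OS-specific work and data):

```cpp
template<class OSImp>
class BaseFoo
{
public:
    void Bar() { static_cast<OSImp*>(this)->Bar(); }
};

class MyFavOSFoo : public BaseFoo<MyFavOSFoo>
{
public:
    MyFavOSFoo() : calls(0) {}
    void Bar() { ++calls; }  // stands in for OS-specific work
    int calls;               // stands in for OS-specific data
};

// Generic "client" code: works with any implementation, resolved
// entirely at compile time -- no virtual dispatch involved.
template<class OSImp>
void UseFoo(BaseFoo<OSImp>& foo)
{
    foo.Bar();
}
```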
There are a couple of problems however (otherwise I wouldn't be here,
would I? :)
The client code, which is OS-independent, no longer has a polymorphic
base class that it can just use. I somehow need to use MyFavOSFoo
directly. The most obvious solution that comes to mind (but I'm open
to suggestions) is to have the client code use a class called 'Foo'. I
could then have an OS-specific header file that has:
typedef MyFavOSFoo Foo;
The problem then is that the selection of implementation/OS boils down
to #including the right header, which seems very fragile.
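For concreteness, the selection could at least be centralized in one header keyed off the usual predefined compiler macros, something like this (the stub classes here stand in for the real per-OS headers):

```cpp
// Stubs standing in for the real per-OS classes/headers.
class WinFoo   { public: int GetID() const { return 1; } };
class PosixFoo { public: int GetID() const { return 2; } };

// The one OS-dependent choice, keyed off predefined compiler macros.
#if defined(_WIN32)
typedef WinFoo Foo;
#else
typedef PosixFoo Foo;
#endif
```

It is still an #include/#ifdef-based selection, though, so the fragility concern remains.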
Then I started thinking of some less 'common' solutions (aka hacks).
Most of these I've already discarded, but one that has stuck in my
mind is the following:
Have a 'public' header file defining the public interface. Then have
an OS-specific .h and .cpp which implement the interface defined in
the public .h. i.e.
// public.h
class Foo
{
public:
    void Bar();
    // No data members
};

// private.h
class Foo // same name as in public.h
{
public:
    void Bar();
private:
    // some OS-specific data that the client code
    // doesn't need to know about
};

// private.cpp
void Foo::Bar() { /* do some OS-specific stuff */ }
(obviously this needs some sort of Create() function, as the client
code only sees the data-less Foo and so can't safely call new or
construct one directly)
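A rough sketch of what I mean by a Create() function (Create/Destroy are invented names, and the stub Bar() stands in for the real OS-specific code, which in the actual scheme would live in private.cpp against the private definition of Foo):

```cpp
// "public.h": all the client ever sees. No data members, so the client
// never knows the real size of Foo and must not construct one itself.
class Foo
{
public:
    static Foo* Create();         // implemented in the OS-specific .cpp
    static void Destroy(Foo* f);
    int Bar();
private:
    Foo() {}   // force clients through Create() ...
    ~Foo() {}  // ... and Destroy()
};

// "private.cpp": the OS-specific side (stubbed here for illustration).
Foo* Foo::Create()        { return new Foo; }
void Foo::Destroy(Foo* f) { delete f; }
int  Foo::Bar()           { return 7; }  // stands in for OS-specific work
```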
This does pretty much all I want. I can have multiple compile-time
implementations and there is no runtime overhead. I can put private.h
and private.cpp in a static library and let the linker do its magic.
A couple of problems though:
- I would need to make darned sure that the interface in public.h
matches that in private.h (even the declaration order has to match;
strictly speaking, having two different definitions of Foo violates
the one-definition rule)
- I would need to maintain twice the number of header files (and there
are a lot of them).
Now that I've presented my case, a couple of questions:
1. Am I just being silly? Should I just use virtuals and be done with
it? I know when the performance impact of virtual functions has been
discussed in the past, the advice often is 'profile it and make a
decision based on the results'. The thing is, I'm still in the design
stage and hence have nothing to profile. Also, I would expect that
there is at least some variation between OSs/compilers. What if it
performs great on 3 out of 4 platforms? I would be locked in with that
design.
2. Am I trying to use C++ in a way that it was not designed to be
used? For example, C# and Java have the concept of a module (or
package or assembly or whatever) that is a monolithic unit that
exposes some types and interfaces. C++'s answer to this would be a
pure virtual class.
3. Is there a 'standard' approach to cross-platform development that
I've completely missed?
Many thanks,
Bill