Bytes IT Community

Best C++ compiler for DOS programs

I have to develop several large and complex C++ hardware test programs that
should work under DOS, most likely with a 32-bit DOS extender. The development
workstation OS would be Microsoft XP. Quite some time ago I worked in DOS,
with Borland BC++ 4.1, but I no longer have it. Which compiler would you
recommend now? Which ones support serious DOS program development?
The criteria are the number of available free library modules (graphic menu
system, mouse driver, I/O), ease of development, price (maybe free?), and
current and future support. If the compiled program can run in a DOS window
under XP, at least for some early testing, that would be fine too.

So far I have found the free Watcom, Digital Mars and DJGPP compilers:

http://www.digitalmars.com/
http://www.openwatcom.org/index.php/Main_Page
http://www.delorie.com/djgpp/

Which one of these, or other free compilers, is best? What about commercial
compilers? Are they worth the money for DOS development? What would you
recommend?

Steve.

Apr 22 '06 #1
55 Replies


> Which one of these, or other free compilers, is best? What about commercial
compilers? Are they worth the money for DOS development? What would you
recommend me?

Steve.


I am not sure, but I think this group doesn't, and shouldn't, recommend
compilers.

You should try them all if you can, and only you know exactly what your
criteria are.

Regards,
Ben
Apr 22 '06 #2

Steve wrote:
http://www.delorie.com/djgpp/
Which one of these, or other free compilers, is best? What about commercial
compilers? Are they worth the money for DOS development? What would you
recommend me?


I used DJGPP (with the RHIDE IDE, AFAIR); it was nice.
--
Replacing old cliques... with new cliques - almost like fighting corruption.
Fighting sex education and erotica - almost like fighting pathology.
PiS - almost like law and justice... Almost. Almost makes a difference.
Think. Vote sensibly. Not for cheap slogans. // Rafał Maj Raf256
Apr 22 '06 #3


"Steve" <St**@nospam.com> wrote in message
news:e2**********@news.eunet.yu...
I have to develop several large and complex C++ hardware test programs that
should work under DOS, most likely with 32-bit DOS extender. Development
workstation OS would be Microsoft XP. Which ones support serious DOS program
development? So far I have found free Watcom, Digital Mars and DJGPP compilers

http://www.digitalmars.com/
http://www.openwatcom.org/index.php/Main_Page
http://www.delorie.com/djgpp/

I've used DJGPP v2.03 and OW 1.3 for personal programs in C (not C++) for
DOS. DJGPP only supports DOS. DOS support in OW is strong (almost a
Microsoft clone) but development seems almost inactive. The main support
team and contributors seem more concerned with OS/2, wxWidgets, STL, and
lately FreeBSD, Linux, etc. DJGPP has the 'feel' of GCC and has some POSIX
support but doesn't use GLIBC. DJGPP generates better warnings than OW,
but OW compiles much faster and generates _much_ faster code. OpenWatcom as
of 1.3 (they are up to 1.5 now) has no POSIX support and IMO was a bit rough
when compared to the completeness of DJGPP. However, I've read that some of
the OW1.3 problems have supposedly been fixed, like adding long filenames
for DOS.

As for XP, I recall seeing some problems with DJGPP under XP. I don't know
whether they have been resolved or not. It's hard to get information from
Delorie on the future or direction of DJGPP. But it seems that he is or
was working on XP support. I tried to get him interested in using/porting
GLIBC, but he declared it to be out of the question. The next version of
DJGPP, Beta v2.04, has been Beta for 3 to 4 years...
You could also look at these DOS compilers:

DiceRTE
http://www.diefer.de/dicerte/

David Lindauer's CC386
http://members.tripod.com/~ladsoft/cc386.htm

DiceRTE hasn't released all the sources (i.e., main compiler DCC32 and his
DPMI host which he calls the "kernel") and I haven't seen any updates. Most
of the documentation is in German. CC386, a few years ago, had some errors
in the libraries. These have probably been fixed by now. It seems Lindauer
is still updating the compiler.

If you don't need DOS support, there are many other compilers for Windows:
Cygwin, Mingw, LCC-Win32, Pelles C, TenDRA (oops, that's Linux...). If you
need support for multiple environments, OW is one of the few that support
many.
Rod Pemberton
Apr 22 '06 #4

I'm keeping cross-posting of the message I am replying to. Not the best
practice, but I think it's appropriate here. I read it in comp.lang.c++.

benben wrote:
Which one of these, or other free compilers, is best? What about
commercial compilers? Are they worth the money for DOS development?
What would you recommend me?

Steve.
I am not sure but I think this group doesn't, and shouldn't, recommend
compilers.


Not sure which group you mean, you have cross-posted your reply. And
why not recommend a compiler? I mean, I can always recommend VC++ 7.1
over VC++ 7.0, or over Turbo C++ v3, and there are reasons for that
(not for DOS, mind you, VC++ after 1.52 doesn't do DOS, IIRC). I can
recommend Intel C++ v4.5 over VC++ v6, and there are reasons for that.
There is no "best" compiler, since they are usually part of some
product, and you cannot get a bare compiler from the vendor anyway.
You should try them all if you can, and only you knows exactly what
your criteria are.


That is sound advice, or, rather, it would be, if it were possible to
follow it. How can you "try them all", if often just to try you have to
fork over some amount of money or spend significant time fixing your code
that doesn't compile straight up? That's why opinions of those who have
tried at least two out of all existing compilers, and can compare those
two, can be very valuable.

V
--
Please remove capital As from my address when replying by mail
Apr 22 '06 #5

In article <e2**********@news.eunet.yu>, Steve <St**@nospam.com> writes
[original question snipped]


AFAIK the Borland compilers may now be downloaded free of charge. I
would start at the Borland web site. I have a copy of BC++ V4.1.
--
Chris Hills, Staffs, England
ch***@phaedsys.org www.phaedsys.org

Apr 22 '06 #6

Which ones support serious DOS program development?
Criterion should be number of available free library modules (graphic menu
system, mouse driver, I/O), ease of development, price (maybe free?),
current and future support. If compiled program can work in DOS window under
XP, at least for some early testing, that would be fine also.


I would use DJGPP. It is free, it has good support, it is mostly bug-free,
and there are many libraries for it...

bye,

Flo
Apr 22 '06 #7

Rafał Maj Raf256 wrote:
Steve wrote:

http://www.delorie.com/djgpp/
Which one of these, or other free compilers, is best? What about commercial
compilers? Are they worth the money for DOS development? What would you
recommend me?

I used DJGPP (with IDE - RHIDE afair), it was nice.

The Watcom compiler, now free, has worked well for me,
for 16-bit as well as 32-bit targets under DOS.
Apr 22 '06 #8

Steve wrote:
[original question snipped]

10 years or so ago, I used Zortech (now it would be Digital Mars) and
Watcom (10, 11) for 32-bit DOS development. Watcom, at the time, was my
platform of choice for 32-bit DOS (more extenders; more powerful, although
enigmatic, IDE and compiler options; better C optimization (not C++,
though)). Two catches:
1. It was not very close to standard C++; I am not sure if it is now.
2. It came with a choice of 2 or 3 commercial (like the compiler itself)
"DOS extenders" included. I am not sure they are included with the free
OpenWatcom -- you have to check that yourself. The same comments apply to
Zortech (Symantec / Digital Mars).

The only other one I knew that was widely used in production was
MetaWare's High C. I never used it myself and do not know if it is still
around.

Hope this helps,
Pavel

Apr 23 '06 #9

Thank you all for your answers.

It seems I should take a look at the Watcom compiler first. They have many
target platforms and still seem very active. I assume that none of the latest
Microsoft or Borland C++ compilers support DOS program development. Now I
need to find some good graphical (not textual) windowing menu system library
that would, hopefully, work with Watcom. If you have some suggestion about
it, please, I would like to hear it.

Still, it was frustrating. I don't understand the almost complete abandonment
of DOS program development in the commercial compiler world. DOS is still, and
will be for a long time, the only platform for all programs that must have
exclusive access to hardware, or that must run alone for other reasons. Real
time applications with accurate timing are only practical in DOS. It is ideal
for hardware testing, as a controller platform, or for PC hardware malfunction
detection, analysis and repair. If any important PC component is
malfunctioning, you had better not try to boot another operating system,
because hard disk data integrity may be compromised. Again, you must boot DOS
and run some hardware analysis program. But it is becoming harder and harder
to write programs for DOS!

Anyway, are you aware of some specialized newsgroups or blogs devoted
to writing various PC hardware test programs?

Steve.

Apr 23 '06 #10


"Steve" <St**@nospam.com> wrote in message
news:e2**********@news.eunet.yu...
Thank you all for your answers.

Now I need to find some good graphical (not textual) windowing menu system
library that would, hopefully, work with Watcom. If you have some suggestion
about it, please, I would like to hear it.

The DJGPP crowd prefers Allegro, but I've had many problems getting Allegro
applications to work with the video cards I've used. The one I've been
looking at using is DEPUI 3.0. It works well, but it needs a small amount
of simple DJGPP DPMI code ported to OpenWatcom. I'm able to and was
planning on porting it, but just haven't gotten to it and am not sure when I
will... DEPUI seems to work well with all the video cards of mine that have
problems with Allegro.

http://www.deleveld.dds.nl/depui30/index.htm
Still, it was frustrating. I don't understand almost complete abandonment of DOS program development in commercial compiler world.


No 32-bit DOS.

Are you sure that you _really_ need DOS to do your hardware testing? I
would think that there are well established mechanisms for Windows and Linux
to do everything that DOS can. If the DPMI host is doing a lot of switching
from PM to RM and back, results can be incorrect under DOS too... Memory
testing using physically addressed memory is about the only thing that comes
to mind which would be difficult outside of DOS.
Rod Pemberton
Apr 23 '06 #11

Steve wrote:
I don't understand almost complete abandonment of
DOS program development in commercial compiler world.


The answer is pretty simple. Commercial product developers develop for
customers who are willing to pay for products, and that dried up for DOS
long ago. I remember the same lament around 1985 or so when people were
complaining that commercial compiler companies had abandoned CP/M, which
they abandoned for the exact same reason.

-Walter Bright
Digital Mars
Apr 23 '06 #12

Steve wrote:
Thank you all for your answers.

It seems I should take a look at Watcom compiler first. They have many
target platforms and still seem very active. I assume that no latest
Microsoft or Borland C++ compilers support DOS program development. Now I
need to find some good graphical (not textual) windowing menu system
library
that would, hopefully, work with Watcom. If you have some suggestion about
it, please, I would like to hear it.

Library I am using:
ZSVGA by Zephyr Software. ... ZSVGA v1.01 (ZSVGA101.ZIP) is a 32 bit
protected mode SVGA graphics library for the Watcom & Symantec C/C++
compilers. ...
Apr 23 '06 #13

> ZSVGA by Zephyr Software. ... ZSVGA v1.01 (ZSVGA101.ZIP) is a 32 bit
protected mode SVGA graphics library for the Watcom & Symantec C/C++
compilers. ...


It seems to be a commercial product - is it still under development?

Bye
Flo
Apr 23 '06 #14

> Are you sure that you _really_ need DOS to do your hardware testing? I
would think that there are well established mechanisms for Windows and Linux
to do everything that DOS can. If the DPMI host is doing a lot of switching
from PM to RM and back, results can be incorrect under DOS too... Memory
testing using physically addressed memory is about the only thing that comes
to mind which would be difficult outside of DOS.


Please, don't post such words on this list - it is a DOS list... ;-)

Bye
Flo
Apr 23 '06 #15

Walter Bright wrote:
Steve wrote:

I don't understand almost complete abandonment of
DOS program development in commercial compiler world.


The answer is pretty simple. Commercial product developers develop for
customers who are willing to pay for products, and that dried up for DOS
long ago. I remember the same lament around 1985 or so when people were
complaining that commercial compiler companies had abandoned CP/M, which
they abandoned for the exact same reason.


OK, but they abandoned CP/M for another OS of the same kind (actually a worse
one). Approximately the same functionality could be found on the replacement
OS. With DOS, now, we have a platform that is close to the hardware. There
will always be a need for that, and that is the reason why it must be kept
alive. Trying to pass through several layers of code to reach the hardware,
and disabling parts of the OS in the process, does not seem like a good
approach for test/diagnostic/repair/low-level benchmark utilities.

Steve

Apr 23 '06 #16

Rod Pemberton wrote:
The DJGPP crowd prefers Allegro, but I've had many problems getting
Allegro applications to work with the video cards I've used. The one
I've been looking at using is DEPUI 3.0. It works well, but it needs a
small amount of simple DJGPP DPMI code ported to OpenWatcom. I'm able to
and was planning on porting it, but just haven't gotten to it and am not
sure when I will... DEPUI seems to work well with all the video cards of
mine that have problems with Allegro.
http://www.deleveld.dds.nl/depui30/index.htm


It looks very good. Could you give me some info, which part DEPUI needs
fixing? If it's not too complex, maybe I will be able to do it.
Still, it was frustrating. I don't understand almost complete
abandonment of DOS program development in commercial compiler world.


No 32-bit DOS.

Are you sure that you _really_ need DOS to do your hardware testing? I
would think that there are well established mechanisms for Windows and
Linux to do everything that DOS can. If the DPMI host is doing alot of
switching from PM to RM and back, results can be incorrect under DOS
too... Memory testing using physically addressed memory is about the
thing that comes to mind which should be difficult outside of DOS.


At least context switching is predictable. In preemptive multitasking you
can never be sure when you will be interrupted, by which task, what it will
do, and for how long. Another example is a corrupted hard disk. Some power
glitch or bad sector can alter or damage the file system area, or important
OS system files. The OS usually can't be loaded. Or, if it can, you will
further corrupt the hard disk data. The first rule is that you must not write
anything to that disk. But no modern OS can be loaded without writing to the
hard disk. Again, we need a bootable medium with some rudimentary OS, like
DOS, for various repair and backup utilities. In fact, when a PC becomes
unstable for whatever unknown reason, it is better to avoid diagnostic
hardware experiments (replacing components) while booting a real OS. It is
quite possible that you will have to reinstall it again.

I would rather avoid DOS extenders and protected mode, but I don't think that
will be possible. Just the amount of data can be reason enough (for example,
scrollable graphic diagrams), and I would like to avoid the need for
temporary hard disk storage. And in protected-mode DOS I usually have
something like 128MB of memory just waiting for me :)

Steve

Apr 23 '06 #17

Steve wrote:
OK, but they abandoned CP/M for another OS of the same kind (actually worse
one). Approximately same functionality could be found on replacemet OS.
With
DOS, now, we have platform that is close to hardware. There will always be
need for that, and that is the reason why it must be kept alive. Trying to
pass though several layers of code to reach hardware, and disable parts of
OS in the process, does not seem like a good approach for
test/diagnostic/repair/low level benchmark utilities.


You mentioned "free" four times in your request for DOS development
tools. While there are a number of free DOS development tools, many of
them very good, I can tell you for a fact that there is little interest
out there in paying for DOS development tools, and that means there
isn't going to be interest from commercial tool developers in supporting it.
Apr 23 '06 #18


"Steve" <St**@nospam.com> wrote in message
news:e2**********@news.eunet.yu...
Rod Pemberton wrote:
The DJGPP crowd prefers Allegro... The one I've been looking at using is
DEPUI 3.0. It works well, but it needs a small amount of simple DJGPP DPMI
code ported to OpenWatcom.
http://www.deleveld.dds.nl/depui30/index.htm


It looks very good. Could you give me some info on which part of DEPUI needs
fixing? If it's not too complex, maybe I will be able to do it.


If you use DEGFX instead of Allegro, under the DEGFX directories there is a
DJGPP directory. There are four files. Three are small. I think, but am
not sure, that these are the only files that need to be ported. It appears to
me that these are mostly DPMI calls or below-1Mb memory accesses (farpeek(),
etc.). It's fairly straightforward but time-consuming to port these. I
also see some packed structs and use of the DJGPP transfer buffer.
DJGPP packed structs would need to be rewritten:

typedef struct VESA_INFO {
unsigned char VESASignature[4] __attribute__ ((packed));
/* snipped middle of struct */
unsigned char OemData[256] __attribute__ ((packed));
} VESA_INFO;

Rewritten like so (you probably don't need the __DJGPP__ section):

#ifdef __DJGPP__
#define HANDLE_PRAGMA_PACK_PUSH_POP 1
#endif

#pragma pack(push,1)
typedef struct VESA_INFO {
unsigned char VESASignature[4];
/* snipped middle of struct */
unsigned char OemData[256];
} VESA_INFO;
#pragma pack(pop)
The DJGPP transfer buffer can be set up for protected-mode Watcom:
get_dos_mem() is called to set up the buffer (tb), and free_dos_mem() when done:

#ifdef __WATCOMC__
#ifdef __386__
#include <i86.h>        /* union REGS, int386() */

static union REGS r;
void *tb;               /* transfer buffer for DOS calls in memory below 1Mb */
unsigned short sel;     /* sel needed to free allocated memory, don't use */

/* free the DOS memory block allocated by get_dos_mem() */
void free_dos_mem(void)
{
    r.w.ax = 0x0101;    /* DPMI: free DOS memory */
    r.w.dx = sel;
    int386(0x31, &r, &r);
}

/* set up tb - transfer buffer for DOS calls in memory below 1Mb */
void get_dos_mem(void)
{
    r.w.ax = 0x0100;    /* DPMI: allocate DOS memory, no __tb in Watcom */
    r.w.bx = 0x400;     /* 0x400 paragraphs (16 bytes each) = 16384 bytes */
    int386(0x31, &r, &r);
    sel = r.w.dx;
    tb = (void *)(r.w.ax << 4);  /* real-mode segment -> linear address */
}
#endif
#endif

Rod Pemberton
Apr 23 '06 #19

On Sun, 23 Apr 2006 15:07:22 -0700 Walter Bright
<wa****@digitalmars-nospamm.com> waved a wand and this message
magically appeared:
You mentioned "free" four times in your request for DOS development
tools. While there are a number of free DOS development tools, many
of them very good, I can tell you for a fact that there is little
interest out there in paying for DOS development tools, and that
means there isn't going to be interest from commercial tool
developers in supporting it.


I've just been testing your Digital Mars C++ compiler and
OpenWatcom 1.4 under Windows 98, targeting MSDOS 6.22 platforms. Very,
very, nice. But I'd really like to be able to write STL code that can
run under DOS under the large memory model. Is that even do-able? One
of my STL projects only takes up 300k when compiled as a 32 bit windows
console application. OpenWatcom 1.4 barfs on most of my STL code, but I
suppose 1.5 will be much better in that respect.

Oh, and by the way, I've managed to compile 16 bit Windows programs
using just the *downloaded* Digital Mars compiler toolchain plus the
16bit DOS development package. All I did was to copy over Win16.h from
OpenWatcom 1.4 into the \h\win16 include directory and renamed it
windows.h. Mind you, I can't link at all!

--
http://www.munted.org.uk

Take a nap, it saves lives.
Apr 24 '06 #20

Alex Buell wrote:
I've just been testing your Digital Mars C++ compiler and
OpenWatcom 1.4 under Windows 98, targetting MSDOS 6.22 platforms. Very,
very, nice. But I'd really like to be able to write STL code that can
run under DOS under the large memory model. Is that even do-able? One
of my STL projects only takes up 300k when compiled as a 32 bit windows
console application. OpenWatcom 1.4 barfs on most of my STL code, but I
suppose 1.5 will be much better in that respect.
STL isn't very usable for 16 bit code, it's just too big. Another
problem is it has no accommodation for near/far. Although DMC++ does
implement exception handling for 16 bit DOS, even that is more of a
technological feat than a practical one - due to size constraints, I'd
recommend using the older error code technique instead.

Exception handling, STL, etc., are much more practical in a 32 bit system.
Oh, and by the way, I've managed to compile 16 bit Windows programs
using just the *downloaded* Digital Mars compiler toolchain plus the
16bit DOS development package. All I did was to copy over Win16.h from
OpenWatcom 1.4 into the \h\win16 include directory and renamed it
windows.h. Mind you, I can't link at all!


Getting the CD does get you the Windows 16 libraries!

-Walter Bright
www.digitalmars.com C, C++, D programming language compilers
Apr 24 '06 #21

Florian Xaver wrote:
ZSVGA by Zephyr Software. ... ZSVGA v1.01 (ZSVGA101.ZIP) is a 32 bit
protected mode SVGA graphics library for the Watcom & Symantec C/C++
compilers. ...

It seems to be a commercial product - is it still under development?

Bye
Flo

Shareware it was last time I looked.
They also have a 16 bit version (svga.zip)
Apr 24 '06 #22

On Sun, 23 Apr 2006 17:49:32 -0700 Walter Bright
<wa****@digitalmars-nospamm.com> waved a wand and this message
magically appeared:
STL isn't very usable for 16 bit code, it's just too big. Another
problem is it has no accommodation for near/far. Although DMC++ does
implement exception handling for 16 bit DOS, even that is more of a
technological feat than a practical one - due to size constraints,
I'd recommend using the older error code technique instead.

Exception handling, STL, etc., are much more practical in a 32 bit
system.


I guess that's the price of progress. BTW, I bought a copy of Zortech C
decades ago and it was worth every penny I spent on it. Shame I don't
have the original disks nor the software any more, or you'd have got
them. Borland have put up a historical library of their Pascal & C
compilers. Hopefully you could do the same for Zortech C? ;)
Oh, and by the way, I've managed to compile 16 bit Windows programs
using just the *downloaded* Digital Mars compiler toolchain plus the
16bit DOS development package. All I did was to copy over Win16.h
from OpenWatcom 1.4 into the \h\win16 include directory and renamed
it windows.h. Mind you, I can't link at all!


Getting the CD does get you the Windows 16 libraries!


Yes that's right, thanks.

--
http://www.munted.org.uk

Take a nap, it saves lives.
Apr 24 '06 #23

Alex Buell wrote:
I guess that's the price of progress. BTW, I bought a copy of Zortech C
decades ago and it was worth every penny I spent on it. Shame I don't
have the original disks nor the software any more, or you'd have got
them. Borland have put up a historical library of their Pascal & C
compilers. Hopefully you could do the same for Zortech C? ;)


I've wanted to, but I needed the agreement of a couple of people who I
have been unable to locate, and one who has sadly passed away.
Apr 24 '06 #24

"Walter Bright" <wa****@digitalmars-nospamm.com> wrote in message
news:N8******************************@comcast.com...
Alex Buell wrote:
I've just been testing your Digital Mars C++ compiler and
OpenWatcom 1.4 under Windows 98, targetting MSDOS 6.22 platforms. Very,
very, nice. But I'd really like to be able to write STL code that can
run under DOS under the large memory model. Is that even do-able? One
of my STL projects only takes up 300k when compiled as a 32 bit windows
console application. OpenWatcom 1.4 barfs on most of my STL code, but I
suppose 1.5 will be much better in that respect.

We use mingw with our libraries as a convenient test bed. It offers
a free, reasonably current gcc that lets us compile large-model programs
under DOS.
STL isn't very usable for 16 bit code, it's just too big.
News to me, and to many of our embedded customers. STL itself is
weightless. How big the memory footprint is depends on just those
functions you choose to use.
Another problem
is it has no accommodation for near/far.
Also news to me. We've preserved the near/far notation that H-P put
in the earliest allocators. Not used much, AFAIK, but it's there.
Although DMC++ does
implement exception handling for 16 bit DOS, even that is more of a
technological feat than a practical one - due to size constraints, I'd
recommend using the older error code technique instead.
We provide a simplified mechanism for those who choose to compile
with exceptions disabled, but once again it's a matter of taste.
And particular needs.
Exception handling, STL, etc., are much more practical in a 32 bit system.


More precisely, *large programs* are much more practical in 32-bit
systems. Neither exception handling, nor STL, nor etc. are intrinsically
too big to be of use in some programs for 16-bit processors.

P.J. Plauger
Dinkumware, Ltd.
http://www.dinkumware.com
Apr 24 '06 #25

P.J. Plauger wrote:
"Walter Bright" <wa****@digitalmars-nospamm.com> wrote in message
news:N8******************************@comcast.com...
Alex Buell wrote:
I've just been testing your Digital Mars C++ compiler and
OpenWatcom 1.4 under Windows 98, targetting MSDOS 6.22 platforms. Very,
very, nice. But I'd really like to be able to write STL code that can
run under DOS under the large memory model. Is that even do-able? One
of my STL projects only takes up 300k when compiled as a 32 bit windows
console application. OpenWatcom 1.4 barfs on most of my STL code, but I
suppose 1.5 will be much better in that respect.
We use mingw with our libraries as a convenient test bed. It offers
a free, reasonably current gcc that lets us compile large-model programs
under DOS.


I thought mingw only supported 32 bit code under DOS. I just checked the
website for it, and it only mentions 32 bit code. "large-model" programs
under DOS are 16 bit programs.

STL isn't very usable for 16 bit code, it's just too big.


News to me, and to many of our embedded customers. STL itself is
weightless. How big the memory footprint is depends on just those
functions you choose to use.


I'm curious what C++ compiler you're using to generate 16 bit code with STL.
Another problem
is it has no accommodation for near/far.

Also news to me. We've preserved the near/far notation that H-P put
in the earliest allocators. Not used much, AFAIK, but it's there.


Having it in there doesn't mean it works very well. Effective large
model programs need careful management of which segments each function
goes in to, when things can be near, when things can be referred to by
__ss pointers, etc. Templates aren't conducive to this, and neither is
just throwing in a near allocator.
Although DMC++ does
implement exception handling for 16 bit DOS, even that is more of a
technological feat than a practical one - due to size constraints, I'd
recommend using the older error code technique instead.


We provide a simplified mechanism for those who choose to compile
with exceptions disabled, but once again it's a matter of taste.
And particular needs.


I'm curious what 16 bit C++ compiler you're using that supports
exception handling.

Exception handling, STL, etc., are much more practical in a 32 bit system.

More precisely, *large programs* are much more practical in 32-bit
systems. Neither exception handling, nor STL, nor etc. are intrinsically
too big to be of use in some programs for 16-bit processors.


A large part of the effort in developing 16 bit programs was always
spent trying to squeeze the size down. Exception handling adds a big
chunk of size, which will just make it that much harder, and so will
actually reduce the complexity of a program you can build for 16 bits.
STL adds another chunk of size, if only because it doesn't allow tuning
of near/far. I'm not as convinced of the lightweightness of STL as you
are, and iostreams in particular seems to add a huge amount of code even
for simple things. Using C stdio for 16 bit programs is best because
many years were spent optimizing it to get the size down (some vendors
even implemented printf entirely in assembler!), and such effort was
never expended on iostreams.

-Walter Bright
www.digitalmars.com C, C++, D programming language compilers
Apr 24 '06 #26

P.J. Plauger wrote:
"Walter Bright" <wa****@digitalmars-nospamm.com> wrote in message
STL isn't very usable for 16 bit code, it's just too big.

News to me, and to many of our embedded customers. STL itself is
weightless. How big the memory footprint is depends on just those
functions you choose to use.


One of the problems templates have (and STL is thoroughly based on
templates) is that it can go too far with customization, thereby
generating bloat. For example, I use a (non-template) linked list
package that creates a list of 'int' items. I can use it to store lists
of unsigned, shorts, unsigned shorts, char types, near pointers, etc.,
without adding any code. But if it was templatized, a separate
implementation would be generated for each type.

This isn't a problem for 32 bit code generation, where there's lots of
room for the extra code. But it *is* a problem for 16 bit code, where
your code and data have to fit in 640Kb.

-Walter Bright
www.digitalmars.com C, C++, D programming language compilers
Apr 24 '06 #27

"Walter Bright" <wa****@digitalmars-nospamm.com> wrote in message
news:jZ********************@comcast.com...
P.J. Plauger wrote:
"Walter Bright" <wa****@digitalmars-nospamm.com> wrote in message
news:N8******************************@comcast.com...
Alex Buell wrote:
I've just been testing your Digital Mars C++ compiler and
OpenWatcom 1.4 under Windows 98, targetting MSDOS 6.22 platforms. Very,
very, nice. But I'd really like to be able to write STL code that can
run under DOS under the large memory model. Is that even do-able? One
of my STL projects only takes up 300k when compiled as a 32 bit windows
console application. OpenWatcom 1.4 barfs on most of my STL code, but I
suppose 1.5 will be much better in that respect.
We use mingw with our libraries as a convenient test bed. It offers
a free, reasonably current gcc that lets us compile large-model programs
under DOS.


I thought mingw only supported 32 bit code under DOS. I just checked the
website for it, and it only mentions 32 bit code. "large-model" programs
under DOS are 16 bit programs.


Sorry, I blurred the distinction here.
STL isn't very usable for 16 bit code, it's just too big.


News to me, and to many of our embedded customers. STL itself is
weightless. How big the memory footprint is depends on just those
functions you choose to use.


I'm curious what C++ compiler you're using to generate 16 bit code with
STL.


We have a number of embedded OEMs who ship our libraries for both 16-bit
and 32-bit targets. I'm sure that C++ is more popular on the larger ones,
but I know it's not completely absent on the smaller ones.
Another problem is it has no accommodation for near/far.

Also news to me. We've preserved the near/far notation that H-P put
in the earliest allocators. Not used much, AFAIK, but it's there.


Having it in there doesn't mean it works very well. Effective large model
programs need careful management of which segments each function goes in
to, when things can be near, when things can be referred to by __ss
pointers, etc. Templates aren't conducive to this, and neither is just
throwing in a near allocator.


Right. But sometimes it works *well enough*.
Although DMC++ does
implement exception handling for 16 bit DOS, even that is more of a
technological feat than a practical one - due to size constraints, I'd
recommend using the older error code technique instead.


We provide a simplified mechanism for those who choose to compile
with exceptions disabled, but once again it's a matter of taste.
And particular needs.


I'm curious what 16 bit C++ compiler you're using that supports exception
handling.


I defer to our OEMs.
Exception handling, STL, etc., are much more practical in a 32 bit
system.

More precisely, *large programs* are much more practical in 32-bit
systems. Neither exception handling, nor STL, nor etc. are intrinsically
too big to be of use in some programs for 16-bit processors.


A large part of the effort in developing 16 bit programs was always spent
trying to squeeze the size down. Exception handling adds a big chunk of
size, which will just make it that much harder, and so will actually
reduce the complexity of a program you can build for 16 bits.


Not necessarily. You can trade time vs. space for exception handling, and
I've seen both extremes.
STL adds another chunk of size, if only because it doesn't allow tuning of
near/far. I'm not as convinced of the lightweightness of STL as you are,
and iostreams in particular seems to add a huge amount of code even for
simple things.
Ah, I see part of the communication gap here. By STL *you* mean "the
Standard C library", while *I* mean "that set of containers and algorithms
based heavily on the Hewlett-Packard Standard Template Library". We avoid
the iostreams bloat by offering EC++ (as well as the full Standard C++
library), which looks more like the original cfront iostreams than the
full bore templated and internationalized thing that got standardized.
Our Abridged Library consists of EC++ with STL bolted on. That's what
I mean by "weightless" -- the presence of STL costs nothing unless you
use it.
Using C stdio for 16 bit programs is best because many
years were spent optimizing it to get the size down (some vendors even
implemented printf entirely in assembler!), and such effort was never
expended on iostreams.


Well, it was by us. I agree that stdio can be smaller, particularly if
you use a bespoke printf that omits floating-point when you don't need
it. But once again, EC++ has proved repeatedly to be *small enough*.

P.J. Plauger
Dinkumware, Ltd.
http://www.dinkumware.com
Apr 24 '06 #28

"Walter Bright" <wa****@digitalmars-nospamm.com> wrote in message
news:eJ******************************@comcast.com...
P.J. Plauger wrote:
"Walter Bright" <wa****@digitalmars-nospamm.com> wrote in message
STL isn't very usable for 16 bit code, it's just too big.

News to me, and to many of our embedded customers. STL itself is
weightless. How big the memory footprint is depends on just those
functions you choose to use.


One of the problems templates have (and STL is thoroughly based on
templates) is that it can go too far with customization, thereby
generating bloat. For example, I use a (non-template) linked list package
that creates a list of 'int' items. I can use it to store lists of
unsigned, shorts, unsigned shorts, char types, near pointers, etc.,
without adding any code. But if it was templatized, a separate
implementation would be generated for each type.

This isn't a problem for 32 bit code generation, where there's lots of
room for the extra code. But it *is* a problem for 16 bit code, where your
code and data have to fit in 640Kb.


Yep, that's the standard bogey man trotted out by people leery of
templates. In real life, most people don't use eleven different map
types, with eleven versions of tree-walking code. In a sub-megabyte
program, it's not likely they'll need even two. And in real life,
very little code in STL benefits from distilling out in parameter
independent form. As always, the rule should be, try it first to see
if it's *good enough*. If so, you're done, and way earlier in the day.
Don't optimize for speed or space until you know you have to.

P.J. Plauger
Dinkumware, Ltd.
http://www.dinkumware.com
Apr 24 '06 #29

P.J. Plauger wrote:
Another problem is it has no accommodation for near/far.
Also news to me. We've preserved the near/far notation that H-P put
in the earliest allocators. Not used much, AFAIK, but it's there.
Having it in there doesn't mean it works very well. Effective large model
programs need careful management of which segments each function goes in
to, when things can be near, when things can be referred to by __ss
pointers, etc. Templates aren't conducive to this, and neither is just
throwing in a near allocator.

Right. But sometimes it works *well enough*.


I remember well the 80's. Lots of people ported unix utilities to 16 bit
DOS. Those utilities were designed for 32 bit flat code, and whether
they worked "well enough" on DOS was certainly a matter of opinion. They
usually got stomped by utilities and applications that were custom
crafted for the quirks of 16 bit computing.
Although DMC++ does
implement exception handling for 16 bit DOS, even that is more of a
technological feat than a practical one - due to size constraints, I'd
recommend using the older error code technique instead.
We provide a simplified mechanism for those who choose to compile
with exceptions disabled, but once again it's a matter of taste.
And particular needs.

I'm curious what 16 bit C++ compiler you're using that supports exception
handling.


I defer to our OEMs.


I ask the question because I don't know of any 16 bit C++ compiler that
supports either modern templates or exception handling, besides Digital
Mars C++.

Exception handling, STL, etc., are much more practical in a 32 bit
system.
More precisely, *large programs* are much more practical in 32-bit
systems. Neither exception handling, nor STL, nor etc. are intrinsically
too big to be of use in some programs for 16-bit processors.

A large part of the effort in developing 16 bit programs was always spent
trying to squeeze the size down. Exception handling adds a big chunk of
size, which will just make it that much harder, and so will actually
reduce the complexity of a program you can build for 16 bits.

Not necessarily. You can trade time vs. space for exception handling, and
I've seen both extremes.


The two main schemes for doing exception handling are:

1) Microsoft style, where runtime code is inserted to keep track of
where one is in a table of destructors that would need to be unwound

2) Linux style, where the PC is compared against a static table of
addresses to determine where in the table one is

Both involve the addition of a considerable chunk of code (1) or data (1
and 2). Under (2), that chunk consists of data that isn't actually
needed unless an exception is thrown. This is an efficient
implementation under a system that has demand paged virtual memory,
where executables' pages are only loaded from disk if the address is
actually referenced.

This is not the case for 16 bit DOS, which *always* loads the entire
executable into memory. DOS doesn't have demand paged virtual memory. 32
bit DOS extenders do add demand paged virtual memory, but only for 32
bit code, not 16 bit.

Hence, the exception handling bloat is always taking away space from
that precious 640Kb of memory. I suppose it is possible for the
compiler/linker to write the exception handling tables out to a separate
file, but I've never heard of an implementation that did that.

STL adds another chunk of size, if only because it doesn't allow tuning of
near/far. I'm not as convinced of the lightweightness of STL as you are,
and iostreams in particular seems to add a huge amount of code even for
simple things.

Ah, I see part of the communication gap here. By STL *you* mean "the
Standard C library", while *I* mean "that set of containers and algorithms
based heavily on the Hewlett-Packard Standard Template Library".


I mean STL as in "C++ Standard Template Library."
We avoid
the iostreams bloat by offering EC++ (as well as the full Standard C++
library), which looks more like the original cfront iostreams than the
full bore templated and internationalized thing that got standardized.
Our Abridged Library consists of EC++ with STL bolted on.


Digital Mars C++ for 16 bits does offer both of the two older
implementations of iostreams (iostreams went through a couple major
redesigns before being standardized). These work tolerably well on 16
bit platforms, but they are not Standard C++ iostreams by any stretch of
the imagination.

Using C stdio for 16 bit programs is best because many
years were spent optimizing it to get the size down (some vendors even
implemented printf entirely in assembler!), and such effort was never
expended on iostreams.

Well, it was by us. I agree that stdio can be smaller, particularly if
you use a bespoke printf that omits floating-point when you don't need
it. But once again, EC++ has proved repeatedly to be *small enough*.


From http://www.dinkumware.com/embed9710.html:
-----------------------------------
What's Not in Embedded C++
Embedded C++ is a library specification and a minimum language
specification. The minimum language specification is a proper subset of
C++, omitting:

multiple inheritance and virtual base classes
runtime type identification
templates
exceptions
namespaces
new-style casts
------------------------------------

EC++ being practical for 16 bit targets does not imply that templates
and exception handling are. EC++ is kinda what C++ was back in 1991 or
so, when it worked well on 16 bit targets.

Do you know anyone using STL (Standard Template Library) for 16 bit X86
programming? I would be surprised if there were any. I looked around on
the Dinkumware site, but didn't find anything specifically mentioning 16
bit support or any particular 16 bit C++ compilers, but perhaps I missed it.

-Walter Bright
www.digitalmars.com C, C++, D programming language compilers
Apr 24 '06 #30

P.J. Plauger wrote:
"Walter Bright" <wa****@digitalmars-nospamm.com> wrote in message
news:eJ******************************@comcast.com...
P.J. Plauger wrote:
"Walter Bright" <wa****@digitalmars-nospamm.com> wrote in message
STL isn't very usable for 16 bit code, it's just too big.
News to me, and to many of our embedded customers. STL itself is
weightless. How big the memory footprint is depends on just those
functions you choose to use.
One of the problems templates have (and STL is thoroughly based on
templates) is that it can go too far with customization, thereby
generating bloat. For example, I use a (non-template) linked list package
that creates a list of 'int' items. I can use it to store lists of
unsigned, shorts, unsigned shorts, char types, near pointers, etc.,
without adding any code. But if it was templatetized, a separate
implementation would be generated for each type.

This isn't a problem for 32 bit code generation, where there's lots of
room for the extra code. But it *is* a problem for 16 bit code, where your
code and data have to fit in 640Kb.


Yep, that's the standard bogey man trotted out by people leery of
templates. In real life, most people don't use eleven different map
types, with eleven versions of tree-walking code. In a sub-megabyte
program, it's not likely they'll need even two.


In today's world, a sub-megabyte program is a trivial program, and I
would agree with you. But in the 16 bit DOS days, this was not true at
all. A 250K program could be extremely complex. My compiler, for
example, had to be split into 3 passes, and there was lots of various
list types and tree-walking code in it, and it benefited substantially
(and critically) from being able to reuse existing object code as much
as possible. Reusing source code (what templates do) was relatively not
so important.

And in real life,
very little code in STL benefits from distilling out in parameter
independent form. As always, the rule should be, try it first to see
if it's *good enough*. If so, you're done, and way earlier in the day.
Don't optimize for speed or space until you know you have to.


That is a good rule. But in the 16 bit DOS world, you have to start
optimizing for speed/space often right out of the gate, as the limits
were reached very quickly for non-trivial programs. If your program was
going to use more than 64K of data, you had to design that in from the
start, not retrofit it in later. Programs were also far more sensitive
to such optimizations then than today - I don't believe languages like
Ruby or Python would have enjoyed widespread success on those machines.
And remember that early Java implementation - the UCSD P-system? There
was a setup years before its time.
Apr 24 '06 #31

Rod Pemberton wrote:
If you use DEGFX instead of Allegro, under the DEGFX directories there is
a DJGPP directory. There are four files. Three are small. I think, but
am not sure, that these are the only files that need ported. It appears
to me that these are mostly DPMI calls or below 1Mb memory accesses
(farpeek's etc..). It's fairly straightforward but time consuming to
port these. I also see some packed structs and use of DJGPP transfer
buffer.
.................

DJGPP packed structs would need to be rewritten:
.................

The DJGPP transfer buffer can be setup for PM Watcom, get_dos_mem() is
called to setup __tb, and free_dos_mem() when done:
..................

Thanks for the info. I'll see what I can do.

Steve
Apr 25 '06 #32


"Steve" <St**@nospam.com> wrote in message
news:e2**********@news.eunet.yu...
Rod Pemberton wrote:
If you use DEGFX instead of Allegro, under the DEGFX directories there is a DJGPP directory. There are four files. Three are small. I think, but am not sure, that these are the only files that need ported. It appears
to me that these are mostly DPMI calls or below 1Mb memory accesses
(farpeek's etc..). It's fairly straightforward but time consuming to
port these. I also see some packed structs and use of DJGPP transfer
buffer.
.................

DJGPP packed structs would need to be rewritten:
.................

The DJGPP transfer buffer can be setup for PM Watcom, get_dos_mem() is
called to setup __tb, and free_dos_mem() when done:
..................

Thanks for the info. I'll see what I can do.


I ported an early version of Chris Giese's LBA routines from DJGPP to OW.
It should show you the differences between calling the DJGPP __dpmi
functions and using the DPMI RMI structure for OW. It also has the _tb
code, etc...

http://www.openwatcom.org/index.php/...LBA_under_DPMI
Rod Pemberton
Apr 25 '06 #33

"Walter Bright" <wa****@digitalmars-nospamm.com> wrote in message
news:WN******************************@comcast.com...
P.J. Plauger wrote:
> Another problem is it has no accommodation for near/far.
Also news to me. We've preserved the near/far notation that H-P put
in the earliest allocators. Not used much, AFAIK, but it's there.
Having it in there doesn't mean it works very well. Effective large
model programs need careful management of which segments each function
goes in to, when things can be near, when things can be referred to by
__ss pointers, etc. Templates aren't conducive to this, and neither is
just throwing in a near allocator.
Right. But sometimes it works *well enough*.


I remember well the 80's. Lots of people ported unix utilities to 16 bit
DOS. Those utilities were designed for 32 bit flat code, and whether they
worked "well enough" on DOS was certainly a matter of opinion. They
usually got stomped by utilities and applications that were custom crafted
for the quirks of 16 bit computing.


As the guy who did the first rewrite of Unix, I can attest that it ran
just fine on 16-bit computers. We also ported our utilities to other
platforms, including DOS. To this day, I still use quite a few of those
utilities in house to build the packages we ship. So IMO they work
"well enough". YMMV.
> Although DMC++ does
> implement exception handling for 16 bit DOS, even that is more of a
> technological feat than a practical one - due to size constraints, I'd
> recommend using the older error code technique instead.
We provide a simplified mechanism for those who choose to compile
with exceptions disabled, but once again it's a matter of taste.
And particular needs.
I'm curious what 16 bit C++ compiler you're using that supports
exception handling.


I defer to our OEMs.


I ask the question because I don't know of any 16 bit C++ compiler that
supports either modern templates or exception handling, besides Digital
Mars C++.


See http://www.iar.com, by way of example. They use the EDG front end, our
EC++ and Abridged libraries, and a host of their own 8-, 16-, and 32-bit
back ends. The Abridged Library supports templates, which don't require
back-end support (other than huge names in the linker). I don't know which
IAR back ends support exceptions.

IAR is one of about a dozen of our OEM customers who supply C/C++
compilers for the embedded marketplace.
> Exception handling, STL, etc., are much more practical in a 32 bit
> system.
More precisely, *large programs* are much more practical in 32-bit
systems. Neither exception handling, nor STL, nor etc. are
intrinsically
too big to be of use in some programs for 16-bit processors.
A large part of the effort in developing 16 bit programs was always
spent trying to squeeze the size down. Exception handling adds a big
chunk of size, which will just make it that much harder, and so will
actually reduce the complexity of a program you can build for 16 bits.

Not necessarily. You can trade time vs. space for exception handling, and
I've seen both extremes.


The two main schemes for doing exception handling are:

1) Microsoft style, where runtime code is inserted to keep track of where
one is in a table of destructors that would need to be unwound

2) Linux style, where the PC is compared against a static table of
addresses to determine where in the table one is

Both involve the addition of a considerable chunk of code (1) or data (1
and 2). Under (2), that chunk consists of data that isn't actually needed
unless an exception is thrown. This is an efficient implementation under a
system that has demand paged virtual memory, where executables' pages are
only loaded from disk if the address is actually referenced.

This is not the case for 16 bit DOS, which *always* loads the entire
executable into memory. DOS doesn't have demand paged virtual memory. 32
bit DOS extenders do add demand paged virtual memory, but only for 32 bit
code, not 16 bit.

Hence, the exception handling bloat is always taking away space from that
precious 640Kb of memory. I suppose it is possible for the compiler/linker
to write the exception handling tables out to a separate file, but I've
never heard of an implementation that did that.


Right. All I'm challenging is whether your "considerable chunk" of "bloat"
is so excessive as to make C++ completely unusable in the sub-megabyte
domain.
STL adds another chunk of size, if only because it doesn't allow tuning
of near/far. I'm not as convinced of the lightweightness of STL as you
are, and iostreams in particular seems to add a huge amount of code even
for simple things.

Ah, I see part of the communication gap here. By STL *you* mean "the
Standard C library", while *I* mean "that set of containers and
algorithms
based heavily on the Hewlett-Packard Standard Template Library".


I mean STL as in "C++ Standard Template Library."


Then why do you refer to "iostreams in particular", which is not a part
of STL?
We avoid
the iostreams bloat by offering EC++ (as well as the full Standard C++
library), which looks more like the original cfront iostreams than the
full bore templated and internationalized thing that got standardized.
Our Abridged Library consists of EC++ with STL bolted on.


Digital Mars C++ for 16 bits does offer both of the two older
implementations of iostreams (iostreams went through a couple major
redesigns before being standardized). These work tolerably well on 16 bit
platforms, but they are not Standard C++ iostreams by any stretch of the
imagination.


Whereas istream/ostream/fstream etc. in EC++ is often indistinguishable
from the Standard C++ version. It is, in fact, the subset of iostreams
that most people use most of the time.
Using C stdio for 16 bit programs is best because many
years were spent optimizing it to get the size down (some vendors even
implemented printf entirely in assembler!), and such effort was never
expended on iostreams.
Well, it was by us. I agree that stdio can be smaller, particularly if
you use a bespoke printf that omits floating-point when you don't need
it. But once again, EC++ has proved repeatedly to be *small enough*.


From http://www.dinkumware.com/embed9710.html:
-----------------------------------
What's Not in Embedded C++
Embedded C++ is a library specification and a minimum language
specification. The minimum language specification is a proper subset of
C++, omitting:

multiple inheritance and virtual base classes
runtime type identification
templates
exceptions
namespaces
new-style casts
------------------------------------

EC++ being practical for 16 bit targets does not imply that templates and
exception handling are. EC++ is kinda what C++ was back in 1991 or so,
when it worked well on 16 bit targets.


You've described EC++, as specified in 1997. It restricted the language
to give existing (pre-standard) C++ compilers a fighting chance. But
the existence of off-the-shelf complete front ends like EDG have made
that aspect of EC++ way less important. Our most popular embedded
product is the Abridged Library, which relaxes *all* of the above
language restrictions. It's the Standard C++ library that eats space
and time, so the simplified EC++ library iostreams, string, etc. offer
the most significant savings.
Do you know anyone using STL (Standard Template Library) for 16 bit X86
programming? I would be surprised if there were any. I looked around on
the Dinkumware site, but didn't find anything specifically mentioning 16
bit support or any particular 16 bit C++ compilers, but perhaps I missed
it.


See above.

P.J. Plauger
Dinkumware, Ltd.
http://www.dinkumware.com
Apr 25 '06 #34

"Walter Bright" <wa****@digitalmars-nospamm.com> wrote in message
news:k6******************************@comcast.com...
P.J. Plauger wrote:
"Walter Bright" <wa****@digitalmars-nospamm.com> wrote in message
news:eJ******************************@comcast.com...
P.J. Plauger wrote:
"Walter Bright" <wa****@digitalmars-nospamm.com> wrote in message
> STL isn't very usable for 16 bit code, it's just too big.
News to me, and to many of our embedded customers. STL itself is
weightless. How big the memory footprint is depends on just those
functions you choose to use.
One of the problems templates have (and STL is thoroughly based on
templates) is that it can go too far with customization, thereby
generating bloat. For example, I use a (non-template) linked list
package that creates a list of 'int' items. I can use it to store lists
of unsigned, shorts, unsigned shorts, char types, near pointers, etc.,
without adding any code. But if it was templatetized, a separate
implementation would be generated for each type.

This isn't a problem for 32 bit code generation, where there's lots of
room for the extra code. But it *is* a problem for 16 bit code, where
your code and data have to fit in 640Kb.
Yep, that's the standard bogey man trotted out by people leery of
templates. In real life, most people don't use eleven different map
types, with eleven versions of tree-walking code. In a sub-megabyte
program, it's not likely they'll need even two.


In today's world, a sub-megabyte program is a trivial program, and I would
agree with you. But in the 16 bit DOS days, this was not true at all. A
250K program could be extremely complex.


Huh? Why does a 250KB program suddenly get less complex? I agree that
code now freely sprawls because memory is so extensive and so cheap,
but it doesn't follow that a small program now *has* to be simpler than
20 years ago.
My compiler, for example,
had to be split into 3 passes, and there was lots of various list types
and tree-walking code in it, and it benefited substantially (and
critically) from being able to reuse existing object code as much as
possible. Reusing source code (what templates do) was relatively not so
important.
Huh again? If it's important, you do it. If it's not, and it costs you
productivity, you don't. Even today you can make one unified list type
do the work of two or three *if that is important to your code size*.
You get bloat only if you indulge in bloat (and you can afford it).
And in real life,
very little code in STL benefits from distilling out in parameter
independent form. As always, the rule should be, try it first to see
if it's *good enough*. If so, you're done, and way earlier in the day.
Don't optimize for speed or space until you know you have to.


That is a good rule. But in the 16 bit DOS world, you have to start
optimizing for speed/space often right out of the gate, as the limits were
reached very quickly for non-trivial programs.


But you "optimize" by picking a program design that fits the box, not
by fretting over potential code bloat that may or may not matter.
If your program was
going to use more than 64K of data, you had to design that in from the
start, not retrofit it in later. Programs were also far more sensitive to
such optimizations then than today - I don't believe languages like Ruby
or Python would have enjoyed widespread success on those machines. And
remember that early Java implementation - the UCSD P-system? There was a
setup years before its time.


We obviously have a different aesthetic, since I consider the P-system
an idea whose time had come and gone before it really hit the ground.
(Remember Softech Microsystems?) But that's wandering afield. The point
of this response is, there's nothing intrinsic in exceptions, templates,
or C++ in general that prohibits their use in sub-megabyte systems.
Back in the 1980s people were still fretting over the 5-15 per cent
overhead you get when writing in C instead of assembler. C won, mostly
(IMO) because of the much greater productivity and in part because of
the steady increase in memory size and the steady decrease in memory
cost.

Now some people in the embedded world are fretting because of the
additional 10-20 per cent overhead when writing in C++ instead of C.
Memory is dirt cheap, so it's primarily architectural limitations (like
address size) that cause problems. If that overhead pushes you from a
16-bit to a 32-bit architecture, it's worth worrying about. Otherwise,
time to market trumps any piddling extra cost in storage, yes even
when you're making 10 million of 'em. Choice of programming language
is rarely black and white.

P.J. Plauger
Dinkumware, Ltd.
http://www.dinkumware.com
Apr 25 '06 #35


P.J. Plauger wrote:
"Walter Bright" <wa****@digitalmars-nospamm.com> wrote in message
news:k6******************************@comcast.com...
If your program was
going to use more than 64K of data, you had to design that in from the
start, not retrofit it in later. Programs were also far more sensitive to
such optimizations then than today - I don't believe languages like Ruby
or Python would have enjoyed widespread success on those machines. And
remember that early Java implementation - the UCSD P-system? There was a
setup years before its time.


We obviously have a different aesthetic, since I consider the P-system
an idea whose time had come and gone before it really hit the ground.
(Remember Softech Microsystems?) But that's wandering afield. The point


Hasn't hit the ground? That's an interesting viewpoint, in this day
and age of Java and .NET.

[snip]
Otherwise,
time to market trumps any piddling extra cost in storage, yes even
when you're making 10 million of 'em.


Maybe in niche markets. But in a competitive market, you'll need more
than just good enough. I think what illustrates the difference in
programming 16 bit vs. 32 bit, was when Lotus trounced all competitors
by writing their spreadsheet in 100% assembler, and wrote directly to
the video system, so as to extract every bit of power available from
the machine. In fact, one of their competitors was a P-system based
spreadsheet, Context MBA.

Apr 25 '06 #36

"Michael O'Keeffe" <mi*******@gmail.com> wrote in message
news:11*********************@j33g2000cwa.googlegroups.com...
P.J. Plauger wrote:
"Walter Bright" <wa****@digitalmars-nospamm.com> wrote in message
news:k6******************************@comcast.com...
> If your program was
> going to use more than 64K of data, you had to design that in from the
> start, not retrofit it in later. Programs were also far more sensitive
> to
> such optimizations then than today - I don't believe languages like
> Ruby
> or Python would have enjoyed widespread success on those machines. And
> remember that early Java implementation - the UCSD P-system? There was
> a
> setup years before its time.


We obviously have a different aesthetic, since I consider the P-system
an idea whose time had come and gone before it really hit the ground.
(Remember Softech Microsystems?) But that's wandering afield. The point


Hasn't hit the ground? That's an interesting viewpoint, in this day
and age of Java and .NET.


The p-system was a failure for three big reasons (IMO):

1) It didn't have adequate performance on the processors of its time.

2) The interpreter, on a 16-bit system, left even less space for a
program.

3) It didn't deliver the one big thing you should get in trade for
the above -- adequate portability -- because the p-code didn't hide
the endianness of the target platform.

Obviously, Java and .NET have avoided these problems and have each
established an important niche. The UCSD p-system made a splash that
lasted just a few years, by comparison. I stand by what I said.
Otherwise,
time to market trumps any piddling extra cost in storage, yes even
when you're making 10 million of 'em.


Maybe in niche markets. But in a competitive market, you'll need more
than just good enough.


Sorry, but in today's competitive marketplace any given "release" of
an embedded product might well sell for just a year or two. Plenty of
time and opportunity to fix the bugs for the next improvement, provided
you have a market for it. If you're three months late to that market,
however...

How well do you think the first iPod would compete if it were released
today?
I think what illustrates the difference in
programming 16 bit vs. 32 bit, was when Lotus trounced all competitors
by writing their spreadsheet in 100% assembler, and wrote directly to
the video system, so as to extract every bit of power available from
the machine. In fact, one of their competitors was a P-system based
spreadsheet, Context MBA.


Agreed. Lotus had to be written in assembly in those days to be "good
enough". That doesn't alter my basic point.

P.J. Plauger
Dinkumware, Ltd.
http://www.dinkumware.com
Apr 25 '06 #37

Have you heard of Linux? Solaris? Um, MIPS? Do you mean the only
Microsoft product that can give you accurate timing and access to low
level hardware? You can get access to low level hardware (registers,
buses, etc) with windows languages, like C and C++. I'm actually
confused, perhaps I misunderstood the lessons I learned over the last
ten years...Could you explain?

(yes, off topic, but I'm really confused).

-Benry
Steve wrote:
Thank you all for your answers.

It seems I should take a look at Watcom compiler first. They have many
target platforms and still seem very active. I assume that no latest
Microsoft or Borland C++ compilers support DOS program development. Now I
need to find some good graphical (not textual) windowing menu system library
that would, hopefully, work with Watcom. If you have some suggestion about
it, please, I would like to hear it.

Still, it was frustrating. I don't understand the almost complete abandonment of
DOS program development in the commercial compiler world. DOS is still, and will
be for a long time, the only platform for programs that must have
exclusive access to the hardware, or that must work alone for other reasons.
Real-time applications and accurate timing are only possible in DOS. It is
ideal for hardware testing, as a controller platform, or for PC hardware
malfunction detection, analysis and repair. If an important PC component is
malfunctioning, you had better not try to boot another operating system, because
hard disk data integrity may be compromised. Again, you must boot DOS and
run some hardware analysis program. But it is becoming harder and harder to
write programs for DOS!

Anyway, are you aware of any specialized newsgroups or blogs devoted
to writing PC hardware test programs?

Steve.


Apr 25 '06 #38

P.J. Plauger wrote:
As the guy who did the first rewrite of Unix, I can attest that it ran
just fine on 16-bit computers. We also ported our utilities to other
platforms, including DOS. To this day, I still use quite a few of those
utilities in house to build the packages we ship. So IMO they work
"well enough". YMMV.
I'm pretty sure that although you may still be using those programs, you
aren't using them on 16 bit DOS <g>.
I ask the question because I don't know of any 16 bit C++ compiler that
supports either modern templates or exception handling, besides Digital
Mars C++.


See http://www.iar.com, by way of example. They use the EDG front end, our
EC++ and Abridged libraries, and a host of their own 8-, 16-, and 32-bit
back ends. The Abridged Library supports templates, which don't require
back-end support (other than huge names in the linker). I don't know which
IAR back ends support exceptions.

IAR is one of about a dozen of our OEM customers who supply C/C++
compilers for the embedded marketplace.


IAR doesn't seem to support 16 bit X86 - at least they don't list it on
their web site. Their page entitled "Extended Embedded C++" makes it
pretty clear they do not support exception handling, multiple
inheritance, or RTTI. They do support templates as well as
being "memory attribute aware", which is not elaborated.

Hence, the exception handling bloat is always taking away space from that
precious 640Kb of memory. I suppose it is possible for the compiler/linker
to write the exception handling tables out to a separate file, but I've
never heard of an implementation that did that.

Right. All I'm challenging is whether your "considerable chunk" of "bloat"
is so excessive as to make C++ completely unusable in the sub-megabyte
domain.


I didn't say "completely unusable", though I will say it is impractical.
As evidence, no compiler (other than Digital Mars C++) seems to have
implemented it for 16 bit code. IAR is using the EDG front end, which
supports EH, but have apparently *removed* support for it for their 16
bit targets.

I mean STL as in "C++ Standard Template Library."

Then why do you refer to "iostreams in particular", which is not a part
of STL?


I've always considered it part of STL, after all, it is part of STLPort
(which is the STL that Digital Mars ships). If there is an official
definition of STL which excludes iostreams, so be it.

EC++ being practical for 16 bit targets does not imply that templates and
exception handling are. EC++ is kinda what C++ was back in 1991 or so,
when it worked well on 16 bit targets.

You've described EC++, as specified in 1997. It restricted the language
to give existing (pre-standard) C++ compilers a fighting chance. But
the existence of off-the-shelf complete front ends like EDG have made
that aspect of EC++ way less important. Our most popular embedded
product is the Abridged Library, which relaxes *all* of the above
language restrictions. It's the Standard C++ library that eats space
and time, so the simplified EC++ library iostreams, string, etc. offer
the most significant savings.


Ok, but IAR doesn't support exception handling, RTTI, or multiple
inheritance for 16 bit targets (they do support templates). Do you know
anyone (besides Digital Mars C++) that does?
Do you know anyone using STL (Standard Template Library) for 16 bit X86
programming? I would be surprised if there were any. I looked around on
the Dinkumware site, but didn't find anything specifically mentioning 16
bit support or any particular 16 bit C++ compilers, but perhaps I missed
it.


See above.


I checked the web site www.iar.com. They do not list X86 as a supported
target for their C/C++ compilers.

But maybe I am all wrong. If there is a demand for 16 bit X86 compilers
that support exception handling, RTTI, multiple inheritance, etc., I'd
certainly be pleased to work with Dinkumware to fill it.

-Walter Bright
www.digitalmars.com C, C++, D programming language compilers
Apr 25 '06 #39

P.J. Plauger wrote:
"Walter Bright" <wa****@digitalmars-nospamm.com> wrote in message
news:k6******************************@comcast.com. ..
In today's world, a sub-megabyte program is a trivial program, and I would
agree with you. But in the 16 bit DOS days, this was not true at all. A
250K program could be extremely complex.

Huh? Why does a 250KB program suddenly get less complex? I agree that
code now freely sprawls because memory is so extensive and so cheap,
but it doesn't follow that a small program now *has* to be simpler than
20 years ago.


It usually is because of the standard bloat brought in by the C++
runtime library. Once you start supporting locales, wide characters,
exceptions, etc., or linking to some other library, big chunks of code
get pulled in, and so even fairly simple programs are pretty fat
compared with programs of similar size in the DOS daze.
We obviously have a different aesthetic, since I consider the P-system
an idea whose time had come and gone before it really hit the ground.
It was before its time because its performance was so poor on the old
processors. What made the idea workable in the 90's was 100x processor
speed improvements. What sealed the deal for Java was the emergence of
the JIT (Just In Time) compiler for Java (first invented by Symantec).
(Remember Softech Microsystems?) But that's wandering afield. The point
of this response is, there's nothing intrinsic in exceptions, templates,
or C++ in general that prohibits their use in sub-megabyte systems.
Back in the 1980s people were still fretting over the 5-15 per cent
overhead you get when writing in C instead of assembler.
Actually, the cost of writing in C vs asm was about 40% for 16 bit code.
At least for an expert asm programmer. And being the (so far) only
implementer of exceptions, multiple inheritance, and RTTI on 16 bit DOS
I *know* it works. You're just not going to be able to write a program
approaching the complexity and capability of one not using such features.

C won, mostly
(IMO) because of the much greater productivity and in part because of
the steady increase in memory size and the steady decrease in memory
cost.
And improving processor speed. The most successful 16 bit DOS apps,
however, still tended to be written in assembler. Remember how Pkware
buried ARC? All pkware was was a hand optimized assembler version of
ARC. It was common to use a mix of asm and C. 32 bit processors have
pretty much killed off the need for writing in asm anymore.

BTW, another big reason that C won was because the C compilers of the
day were much, much better than the compilers for other languages. That,
for example, buried Pascal. By the time Pascal compilers got better, it
was too late.

Now some people in the embedded world are fretting because of the
additional 10-20 per cent overhead when writing in C++ instead of C.
Memory is dirt cheap, so it's primarily architectural limitations (like
address size) that cause problems. If that overhead pushes you from a
16-bit to a 32-bit architecture, it's worth worrying about. Otherwise,
time to market trumps any piddling extra cost in storage, yes even
when you're making 10 million of 'em. Choice of programming language
is rarely black and white.


I'm not referring to the cost of storage, I'm referring to the 640K
hardwired limitation. Any overhead that adds code size takes away from
the size of the data set the program can handle. This even applies to
simple utilities, like diff.

-Walter Bright
www.digitalmars.com C, C++, D programming language compilers
Apr 25 '06 #40

"Walter Bright" <wa****@digitalmars-nospamm.com> wrote in message
news:D4******************************@comcast.com...
P.J. Plauger wrote:
As the guy who did the first rewrite of Unix, I can attest that it ran
just fine on 16-bit computers. We also ported our utilities to other
platforms, including DOS. To this day, I still use quite a few of those
utilities in house to build the packages we ship. So IMO they work
"well enough". YMMV.


I'm pretty sure that although you may still be using those programs, you
aren't using them on 16 bit DOS <g>.


Wrong. They're .com files. And they're quite useful.
I ask the question because I don't know of any 16 bit C++ compiler that
supports either modern templates or exception handling, besides Digital
Mars C++.


See http://www.iar.com, by way of example. They use the EDG front end,
our
EC++ and Abridged libraries, and a host of their own 8-, 16-, and 32-bit
back ends. The Abridged Library supports templates, which don't require
back-end support (other than huge names in the linker). I don't know
which
IAR back ends support exceptions.

IAR is one of about a dozen of our OEM customers who supply C/C++
compilers for the embedded marketplace.


IAR doesn't seem to support 16 bit X86 - at least they don't list it on
their web site. Their page entitled "Extended Embedded C++" makes it
pretty clear they do not support exception handling, multiple inheritance,
or RTTI. They do support templates as well as
being "memory attribute aware", which is not elaborated.


My bet is they do support multiple inheritance, but what the heck. They
*do* support templates, code bloat and all.
Hence, the exception handling bloat is always taking away space from
that precious 640Kb of memory. I suppose it is possible for the
compiler/linker to write the exception handling tables out to a separate
file, but I've never heard of an implementation that did that.

Right. All I'm challenging is whether your "considerable chunk" of
"bloat"
is so excessive as to make C++ completely unusable in the sub-megabyte
domain.


I didn't say "completely unusable", though I will say it is impractical.
As evidence, no compiler (other than Digital Mars C++) seems to have
implemented it for 16 bit code. IAR is using the EDG front end, which
supports EH, but have apparently *removed* support for it for their 16 bit
targets.


Now wait, if it's impractical why do you keep boasting that you support it?
I mean STL as in "C++ Standard Template Library."

Then why do you refer to "iostreams in particular", which is not a part
of STL?


I've always considered it part of STL, after all, it is part of STLPort
(which is the STL that Digital Mars ships). If there is an official
definition of STL which excludes iostreams, so be it.


I gave it earlier -- the thing that came out of Hewlett-Packard.
EC++ being practical for 16 bit targets does not imply that templates
and exception handling are. EC++ is kinda what C++ was back in 1991 or
so, when it worked well on 16 bit targets.

You've described EC++, as specified in 1997. It restricted the language
to give existing (pre-standard) C++ compilers a fighting chance. But
the existence of off-the-shelf complete front ends like EDG have made
that aspect of EC++ way less important. Our most popular embedded
product is the Abridged Library, which relaxes *all* of the above
language restrictions. It's the Standard C++ library that eats space
and time, so the simplified EC++ library iostreams, string, etc. offer
the most significant savings.


Ok, but IAR doesn't support exception handling, RTTI, or multiple
inheritance for 16 bit targets (they do support templates). Do you know
anyone (besides Digital Mars C++) that does?


Like I said, I picked IAR off the top of my head as one of a dozen of
our compiler OEM customers who license the Abridged Library. It works
properly with or without exceptions, so it's up to the vendor whether
to turn exceptions on.
Do you know anyone using STL (Standard Template Library) for 16 bit X86
programming? I would be surprised if there were any. I looked around on
the Dinkumware site, but didn't find anything specifically mentioning 16
bit support or any particular 16 bit C++ compilers, but perhaps I missed
it.


See above.


I checked the web site www.iar.com. They do not list X86 as a supported
target for their C/C++ compilers.

But maybe I am all wrong. If there is a demand for 16 bit X86 compilers
that support exception handling, RTTI, multiple inheritance, etc., I'd
certainly be pleased to work with Dinkumware to fill it.


Not sure how much "work" we have to do in that department. It's all
kinda there.

P.J. Plauger
Dinkumware, Ltd.
http://www.dinkumware.com
Apr 25 '06 #41

On Tue, 25 Apr 2006 12:34:39 -0700 Walter Bright
<wa****@digitalmars-nospamm.com> waved a wand and this message
magically appeared:
BTW, another big reason that C won was because the C compilers of the
day were much, much better than the compilers for other languages.
That, for example, buried Pascal. By the time Pascal compilers got
better, it was too late.


You're correct - I just tested it with TurboPascal 1.05a, TurboPascal
3.02a and TurboPascal 5.5; compiling 'Hello world' produces:

1.05a: 8,803 bytes
3.02a: 11,411 bytes
5.5: 1,840 bytes (quite remarkable!)

Too little, too late it would seem!

--
http://www.munted.org.uk

Take a nap, it saves lives.
Apr 25 '06 #42

Alex Buell wrote:
On Tue, 25 Apr 2006 12:34:39 -0700 Walter Bright
<wa****@digitalmars-nospamm.com> waved a wand and this message
magically appeared:
BTW, another big reason that C won was because the C compilers of the
day were much, much better than the compilers for other languages.
That, for example, buried Pascal. By the time Pascal compilers got
better, it was too late.


You're correct - I just tested it with TurboPascal 1.05a, TurboPascal
3.02a and TurboPascal 5.5; compiling 'Hello world' produces:

1.05a: 8,803 bytes
3.02a: 11,411 bytes
5.5: 1,840 bytes (quite remarkable!)

Too little, too late it would seem!


There were Pascal compilers for the PC before TP, and they were
uniformly terrible even by the standards of the day. They drove a lot of
people to look at C instead.
Apr 25 '06 #43

P.J. Plauger wrote:
"Walter Bright" <wa****@digitalmars-nospamm.com> wrote in message
news:D4******************************@comcast.com...
P.J. Plauger wrote:
As the guy who did the first rewrite of Unix, I can attest that it ran
just fine on 16-bit computers. We also ported our utilities to other
platforms, including DOS. To this day, I still use quite a few of those
utilities in house to build the packages we ship. So IMO they work
"well enough". YMMV.

I'm pretty sure that although you may still be using those programs, you
aren't using them on 16 bit DOS <g>.


Wrong. They're .com files. And they're quite useful.


You are running build tools on a 16 bit DOS system? I'm amazed. But what
I think you meant is you're running them on Win32's DOS emulator. Long
ago, I recompiled my .com utilities to be native Win32 programs. They
run a lot better that way <g>. For example, DOS programs still suffer
from the wretched 8.3 uppercase filename limitations, and who would want
that on their Win32 system? And if they're in portable C or C++, why not
recompile them?

True .com files have a maximum of 64K of code and stack and data, total.
However, one could rename any .exe file to ".com" and it would work, and
one can even rename Win32 exe files to ".com" and they'll still work,
too, but they aren't really com files.

(I also find the Win32 DOS emulator to be rather slow, another good
reason to recompile those 16 bit programs to native 32 bits.)

My bet is they do support multiple inheritance, but what the heck. They
*do* support templates, code bloat and all.


Yes, though it would be interesting to see what use 16 bit customers
make of templates. It's often necessary for marketing reasons to support
things even if they aren't used.

I didn't say "completely unusable", though I will say it is impractical.
As evidence, no compiler (other than Digital Mars C++) seems to have
implemented it for 16 bit code. IAR is using the EDG front end, which
supports EH, but have apparently *removed* support for it for their 16 bit
targets.


Now wait, if it's impractical why do you keep boasting that you support it?


Because:

1) At the time, a lot of customers said they wanted it. It certainly was
worth implementing as it's not so easy to see in advance how things will
work out.

2) It's easy to support (given the support for it for DOS32, where it is
very useful).

3) There's no reason to go in and break it. Support for it can be turned
on and off with a switch.

4) Whether a customer wants it is up to them. I see no reason to make
the decision for them. Sometimes I do provide some advice to those who
don't realize the tradeoffs involved with 16 bit code. After all, I like
to see them succeed with their projects.

5) Having an existing implementation means we aren't having a
hypothetical discussion here. You disagree with me and believe it is
practical. At some point it is useless to discuss this anymore; if you
want to use it in your next 16 bit DOS project, it's there for you to
try out.

6) I'm proud of the implementation, even if it turned out to be less
useful than I'd hoped.

7) I try to be realistic about it. If I was just some boasting marketing
droid, why would I say that a Standard C++ feature exclusively
implemented by Digital Mars is not so practical? I'd be saying it's the
greatest thing ever!

-Walter Bright
www.digitalmars.com C, C++, D programming language compilers
Apr 25 '06 #44

On Tue, 25 Apr 2006 15:36:31 -0700 Walter Bright
<wa****@digitalmars-nospamm.com> waved a wand and this message
magically appeared:
There were Pascal compilers for the PC before TP, and they were
uniformly terrible even by the standards of the day. They drove a lot
of people to look at C instead.


Unfortunately DMC 8.47 loses out to OpenWatcom C/C++ 1.4. Here's a
simple test compiling a 16 bit DOS program ("Hello World!")

dmc -msd hello.c gives: 12,362 bytes
wcl -bcl=dos -ms gives: 8,702 bytes

Sorry, pal...

--
http://www.munted.org.uk

Take a nap, it saves lives.
Apr 26 '06 #45

Benry wrote:
Have you heard of Linux? Solaris? Um, MIPS? Do you mean the only
Microsoft product that can give you accurate timing and access to low
level hardware? You can get access to low level hardware (registers,
buses, etc) with windows languages, like C and C++. I'm actually
confused, perhaps I misunderstood the lessons I learned over the last
ten years...Could you explain?

(yes, off topic, but I'm really confused).


My main focus is hardware: servicing PC computers. The target for these programs
is a PC with some kind of problem. It can be a corrupted OS or hardware. First,
you must find which of the two it is, ideally with some other software,
not the original OS. That is the first place you need a hardware test
independent of the OS. Then, if it is hardware, what is wrong with it, and where?
The OS usually can't boot, and if it can, you have to boot it a vast number of
times while you are replacing suspect components. If you do it with some
general-purpose OS, that is an extremely long and unpleasant experience. The OS
is unstable, the configuration is constantly changing, you get a number of
meaningless errors, and you are never sure at which point you have finally
corrupted your OS (registry or configuration data). That means that at some
point you may have two or more independent errors at the same time, which
destroys the logic of the replacement procedure, and you may never know when you
have replaced the bad component. The moral of the story is that any high-level
OS is almost completely useless here.

So, I do not need Microsoft. I can use a free DOS clone :) Actually, I would be
happier writing programs at the BIOS level, or even avoiding the BIOS too
(except for boot). But that is overkill, and I would lose some useful software,
hardware drivers and test flexibility. DOS's level of complexity, small size
(think of the bad-memory-locations case!), and non-multitasking design are ideal.
Those were the target-platform OS considerations.

If we talk about the development platform OS, that depends on what OS you know
and have at home and at the various places where you work, or will work in the
future. At all those places I have only Windows. Because of that, I know it
(and must know it) well, and I know much less about Linux. Working in two
different sets of high-level software environments, plus some low-level ones,
would be too much. Windows is the most common OS, and that basically determines
everything else. It is not a question of OS performance or my preference.

Steve

Apr 26 '06 #46

"P.J. Plauger" <pj*@dinkumware.com> wrote:
We use mingw with our libraries as a convenient test bed. It offers
a free, reasonably current gcc that lets us compile large-model programs
under DOS.


Doesn't MinGW run under Windows and produce Win32 executables?

Did you really get it to make DPMI (32-bit DOS) executables?
Like DJGPP? How?

Apr 26 '06 #47

In article
<20********************************@munted.org.uk>,
al********@munted.org.uk says...

[ ... ]
dmc -msd hello.c gives: 12,362 bytes
wcl -bcl=dos -ms gives: 8,702 bytes


cl -AT hello.c gives: 5,532 bytes

If memory serves, when I was doing this regularly, I
could manage something like 1300 bytes, but I'm afraid
I've forgotten quite a few of the old DOS tricks...

--
Later,
Jerry.

The universe is a figment of its own imagination.
Apr 26 '06 #48

"Chris Giese" <No********@execpc.com> wrote in message
news:44***************@news.execpc.com...
"P.J. Plauger" <pj*@dinkumware.com> wrote:
We use mingw with our libraries as a convenient test bed. It offers
a free, reasonably current gcc that lets us compile large-model programs
under DOS.


Doesn't MinGW run under Windows and produce Win32 executables?

Did you really get it to make DPMI (32-bit DOS) executables?
Like DJGPP? How?


I didn't. I misspoke. Sorry.

P.J. Plauger
Dinkumware, Ltd.
http://www.dinkumware.com
Apr 26 '06 #49

Steve,

All I mean is that while Windows 95 and previous boot DOS first before
Windows, the OS was really DOS. That's the same with Linux and BSD.
You have a text based, low level OS, which isn't preconfigured to be
"high level" like WinXP. In fact, if you wanted, you can install just
the linux kernel, and whatever libraries you need to do your work, and
that's it. It's a tiny footprint in that case. The good thing is,
that the Linux kernel is still supported, updated, and more secure than
anything close to it. There are well-known and current C++ compilers for
Linux. In fact, if you got ambitious enough about it, you could
create your own Linux distro just for troubleshooting hardware issues.
I don't think the learning curve is too hard, since you're familiar
with DOS, linux shell isn't too much different.

However, I just realized that I'm replying to a DOS newsgroup as well
as a c++ one.

I can't offer advice for DOS users, so good luck :).
Steve wrote:
Benry wrote:
Have you heard of Linux? Solaris? Um, MIPS? Do you mean the only
Microsoft product that can give you accurate timing and access to low
level hardware? You can get access to low level hardware (registers,
buses, etc) with windows languages, like C and C++. I'm actually
confused, perhaps I misunderstood the lessons I learned over the last
ten years...Could you explain?

(yes, off topic, but I'm really confused).


My main focus is hardware: servicing PC computers. The target for these programs
is a PC with some kind of problem. It can be a corrupted OS or hardware. First,
you must find which of the two it is, ideally with some other software,
not the original OS. That is the first place you need a hardware test
independent of the OS. Then, if it is hardware, what is wrong with it, and where?
The OS usually can't boot, and if it can, you have to boot it a vast number of
times while you are replacing suspect components. If you do it with some
general-purpose OS, that is an extremely long and unpleasant experience. The OS
is unstable, the configuration is constantly changing, you get a number of
meaningless errors, and you are never sure at which point you have finally
corrupted your OS (registry or configuration data). That means that at some
point you may have two or more independent errors at the same time, which
destroys the logic of the replacement procedure, and you may never know when you
have replaced the bad component. The moral of the story is that any high-level
OS is almost completely useless here.

So, I do not need Microsoft. I can use a free DOS clone :) Actually, I would be
happier writing programs at the BIOS level, or even avoiding the BIOS too
(except for boot). But that is overkill, and I would lose some useful software,
hardware drivers and test flexibility. DOS's level of complexity, small size
(think of the bad-memory-locations case!), and non-multitasking design are ideal.
Those were the target-platform OS considerations.

If we talk about the development platform OS, that depends on what OS you know
and have at home and at the various places where you work, or will work in the
future. At all those places I have only Windows. Because of that, I know it
(and must know it) well, and I know much less about Linux. Working in two
different sets of high-level software environments, plus some low-level ones,
would be too much. Windows is the most common OS, and that basically determines
everything else. It is not a question of OS performance or my preference.

Steve


Apr 26 '06 #50

55 Replies
