Bytes IT Community

C99 compiler access

P: n/a
I have access to a wide variety of different platforms here at JPL
and they all have pretty good C99 compilers.

Some people claim that they have not moved to the new standard
because of the lack of C99-compliant compilers.
Is this just a lame excuse for back-sliding?
Nov 14 '05
233 Replies


Brian Inglis wrote:
E. Robert Tisdale wrote:
Embedded [C] programmers represent
a tiny fraction of all C programmers.


Where do you get your statistics
on the number of embedded and non-embedded C programmers?
I don't know either way, but am willing to accept that
the number of C programmers working on embedded devices
may now be larger than the number working on OSes, tools, DBMSes,
*[iu]x projects in C nowadays, as the commercial world
seems to have switched from C to newer languages and tools.


Nov 14 '05 #201

Reading this thread, I was kind of curious as to the "state of play" of the
major languages - a web search revealed quite a useful site ...

http://www.tiobe.com/tpci.htm

... which shows that there hasn't been a *huge* change in quite a while.
Although I was kind of surprised to see the fairly recent dip in Java.

-Pete.
--
+
| http://home.comcast.net/~pete.gray/
-
"E. Robert Tisdale" <E.**************@jpl.nasa.gov> wrote in message
news:ch**********@nntp1.jpl.nasa.gov...
Brian Inglis wrote:
E. Robert Tisdale wrote:
Embedded [C] programmers represent
a tiny fraction of all C programmers.


Where do you get your statistics
on the number of embedded and non-embedded C programmers?
I don't know either way, but am willing to accept that
the number of C programmers working on embedded devices
may now be larger than the number working on OSes, tools, DBMSes,
*[iu]x projects in C nowadays, as the commercial world
seems to have switched from C to newer languages and tools.

Nov 14 '05 #202

David A. Holland wrote:
If you want to really turn overcommit off, you have to reserve swap
not just for malloc but any time you do a virtual memory operation
that could lead to copying a page later: forking, for instance, or
mapping a file into memory. Since most Unix systems nowadays have
shared libraries that are loaded with mmap, and the libraries continue
to bloat out, you can end up with a *lot* of pointlessly reserved swap
space even by current standards.


You don't need multiple copies of read-only (I or D)
segments. And the reason the rest of a process' space
is R/W is that it will most likely be needed in the
course of performing the algorithm. The only
inherently *dynamic* RAM is the stack (which in a
properly designed app should be bounded by a reasonable
size) and the heap. The main purpose of the heap is
specifically to share the limited RAM resource among
competing processes. It is important for program
reliability that each process be able to sense when a
resource shortage occurs during execution and to retain
control when that happens; the recovery strategy needs
to be specific to the application, but typically would
involve backing out a partially completed transaction,
posting an error notification, scheduling a retry, etc.

Ideally stack overflow would throw an exception, but
anyway malloc is relied on to return a null pointer to
indicate heap resource depletion. For a process to be
abnormally terminated in the middle of a data operation
instead of being given control over when and how to
respond to the condition is unacceptable. If, as you
claim, there is now a consensus among OS designers that
it is okay for systems to behave that way, then that
indicates a serious problem with the OS designers.
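The recovery pattern described above (detect heap depletion via the null return, retain control, back out) can be sketched in C as follows. This is only an illustration: the function name and the buffer-growing scenario are invented, not from the thread.

```c
#include <stdio.h>
#include <stdlib.h>

/* Illustrative sketch: grow a buffer and detect heap depletion via
   realloc's null return, so the caller retains control instead of
   being terminated mid-operation. All names here are hypothetical. */
int grow_buffer(char **buf, size_t *cap, size_t new_cap)
{
    char *p = realloc(*buf, new_cap);   /* realloc(NULL, n) acts as malloc */
    if (p == NULL) {
        /* Resource shortage sensed: *buf is still valid, so the caller
           can post an error, schedule a retry, or back out the
           partially completed transaction. */
        fprintf(stderr, "allocation of %zu bytes failed\n", new_cap);
        return -1;
    }
    *buf = p;
    *cap = new_cap;
    return 0;
}
```

Note that this pattern only works if the allocator actually reports failure; the overcommit behaviour under discussion is precisely what defeats it.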

It also indicates that Linux must *not* be used in any
critical application, at least not without great pains
being taken to determine the maximum aggregate RAM
utilization and providing enough RAM+swap space to
accommodate it, and one still would be taking a chance.

I suppose this is what comes of letting amateurs design
and construct operating systems, and/or write the apps.
There was a time when software engineers were in charge.

Nov 14 '05 #203

Chris Hills <ch***@phaedsys.org> wrote in message news:<k+**************@phaedsys.demon.co.uk>...
[...]
What C programming isn't embedded? By which I mean pure C as opposed to
the pseudo C people do with C++ compilers.


People who work too long in one type of environment have a tendency to
forget about the other environments. That doesn't mean that they don't
exist. I work on a project which processes the data coming down from
the MODIS instruments on the Terra and Aqua satellites. We've got
about 88 "Process groups" working on this data, each of which consist
of one or more programs, and the bulk of that code is written in C,
the rest is mostly either Fortran 77 or Fortran 90. It can't be
"pseudo C" because we're not allowed to deliver code that needs a C++
compiler to build it.

By the way - if code is written in the common subset of C and C++, in
what sense is it "pseudo C"?
Nov 14 '05 #204

Douglas A. Gwyn wrote:
I suppose this is what comes of letting amateurs design
and construct operating systems, and/or write the apps.


You mean unlike the high-quality products of professionals
like Microsoft?

Nobody "let" anyone design and construct either Unix or Windows stuff.
Neither were developed in dictatorships with mandatory quality control
and laws prohibiting both sale and purchase of products without an
official stamp of quality.

--
Hallvard
Nov 14 '05 #205

"Douglas A. Gwyn" wrote:

David A. Holland wrote:
If you want to really turn overcommit off, you have to reserve
swap not just for malloc but any time you do a virtual memory
operation that could lead to copying a page later: forking,
for instance, or mapping a file into memory. Since most Unix
systems nowadays have shared libraries that are loaded with
mmap, and the libraries continue to bloat out, you can end
up with a *lot* of pointlessly reserved swap space even by
current standards.
You don't need multiple copies of read-only (I or D)
segments. And the reason the rest of a process' space
is R/W is that it will most likely be needed in the
course of performing the algorithm.


Not necessarily, it could be data that is initialised
during the first few seconds of running and then remains
largely unchanged and could be shared in systems that
implement copy-on-write sharing when a process forks.
The only inherently *dynamic* RAM is the stack (which in a
properly designed app should be bounded by a reasonable
size) and the heap. The main purpose of the heap is
specifically to share the limited RAM resource among
competing processes. It is important for program
reliability that each process be able to sense when a
resource shortage occurs during execution and to retain
control when that happens; the recovery strategy needs
to be specific to the application, but typically would
involve backing out a partially completed transaction,
posting an error notification, scheduling a retry, etc.
But what others are saying is that many applications
are allocating much, much more memory than they need, "just
in case", and that lazy allocation by the OS results in
a large improvement in performance.

Ideally stack overflow would throw an exception, but
anyway malloc is relied on to return a null pointer to
indicate heap resource depletion. For a process to be
abnormally terminated in the middle of a data operation
instead of being given control over when and how to
respond to the condition is unacceptable.
This depends on the purpose. For some uses, a large
improvement in performance may be worth the dangers of lazy
allocation. Also, my (possibly faulty) recollection is that
the memory failure results in a signal, which can in theory
be caught and handled - not that I would want to do it.
If, as you claim, there is now a consensus among OS designers
that it is okay for systems to behave that way, then that
indicates a serious problem with the OS designers.
What David actually said was that the consensus
was that it was "unreasonably expensive", ie that the
performance degradation was too large to justify the improvement
in safety. Of course, this is a value judgement, and
different individuals will come to different conclusions,
and the performance/safety trade off will be different
in a nuclear reactor controller, a critical database server,
and a user's desk top PC, to name a few extreme cases.

It also indicates that Linux must *not* be used in any
critical application, at least not without great pains
being taken to determine the maximum aggregate RAM
utilization and providing enough RAM+swap space to
accommodate it, and one still would be taking a chance.
Yes, you would have to ensure that the RAM+SWAP was
large enough for the worst case, but ensuring this should be
adequate. With overcommitment turned off, it may be the
case that you would have to supply so much RAM+swap that
the system became too expensive.

I suppose this is what comes of letting amateurs design
and construct operating systems, and/or write the apps.
There was a time when software engineers were in charge.


I suspect there may be some truth to the claim about
amateurs designing applications, as modern applications seem
to regard RAM as infinite. However I think you are a little
unfair concerning OS designers. There seems to have been
a lot of debate and serious consideration of the performance
and safety trade offs. The fact that some OS designers have
made decisions on these trade offs that you disagree with
does not, in my opinion, mean that they are "amateurs".

Of course the best is to have both options. If
you need the safety, turn overcommitment off and pay the
price (either in money for more RAM or swap space or in
performance or both). If you do not need the safety,
allow overcommitment and get better performance for the
same cash. If you need safety, but cannot afford the price
of turning overcommitment off, either carefully analyse your
memory usage to ensure you are "reasonably" safe, or get
more money, or lower your target safety level, or lower
your performance standards, or compromise in some other
way.
Charles
Nov 14 '05 #206

Charles Sanders <C.******************@BoM.GOV.AU> wrote:
"Douglas A. Gwyn" wrote:
Ideally stack overflow would throw an exception, but
anyway malloc is relied on to return a null pointer to
indicate heap resource depletion. For a process to be
abnormally terminated in the middle of a data operation
instead of being given control over when and how to
respond to the condition is unacceptable.
I agree with that completely: in my opinion, it is just plain unacceptable
for malloc() to lie. When I write code, I always take into account the
possibility of malloc() failure and implement some kind of recovery
strategy. A malloc() that lies and relies on the execution environment just
arbitrarily killing the program off if the memory isn't really there is of no
use to me whatsoever.

Of course, much of the time all malloc() is at the mercy of the environment
- all it can do is request the extra memory and assume that if it is told
that the allocation was successful, then the memory is there.

The only possible workaround I can think of is to always memset the data to
a non-zero value (you wouldn't want malloc() using some memory-mapping trick
to map several pages of zero data to a single page of zero bytes, after all,
would you?)
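The memset workaround might look like the sketch below. The wrapper name is made up for illustration, and whether the non-zero fill actually forces the OS to commit pages is exactly the implementation-specific behaviour being debated.

```c
#include <stdlib.h>
#include <string.h>

/* Hypothetical wrapper: allocate, then fill with a non-zero byte so a
   lazily allocating OS must back every page with real memory now
   (an all-zero fill might be satisfied by a shared zero page). */
void *malloc_committed(size_t n)
{
    void *p = malloc(n);
    if (p != NULL)
        memset(p, 0xAA, n);
    return p;
}
```

On an overcommitting system the process may still be killed during the memset itself; the fill merely moves the failure point to the allocation site rather than making the allocation reliable.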

I don't see why I should have to impede the performance of my applications
simply to work around the environment lying to me/malloc() and then killing
me off when I use the information that I accepted in good faith!

This depends on the purpose. For some uses, a large
improvement in performance may be worth the dangers of lazy
allocation. Also, my (possibly faulty) recollection is that
the memory failure results in a signal, which can in theory
be caught and handled - not that I would want to do it.


Indeed not, given the restrictions on what signal handlers can and cannot
do. The thing is that if malloc tells me that there's a problem, then I
can do something about it as the program can only fail in a handful of
places and I can deal with those situations. The current situation is that
programs can fail at any point where a dynamically allocated object is
accessed due to a memory exhaustion problem.

If, as you claim, there is now a consensus among OS designers
that it is okay for systems to behave that way, then that
indicates a serious problem with the OS designers.


What David actually said was that the consensus
was that it was "unreasonably expensive", ie that the
performance degradation was too large to justify the improvement
in safety. Of course, this is a value judgement, and
different individuals will come to different conclusions,
and the performance/safety trade off will be different
in a nuclear reactor controller, a critical database server,
and a user's desk top PC, to name a few extreme cases.

It also indicates that Linux must *not* be used in any
critical application, at least not without great pains
being taken to determine the maximum aggregate RAM
utilization and providing enough RAM+swap space to
accommodate it, and one still would be taking a chance.


Yes, you would have to ensure that the RAM+SWAP was
large enough for the worst case, but ensuring this should be
adequate. With overcommitment turned off, it may be the
case that you would have to supply so much RAM+swap that
the system became too expensive.


That is true. After spending many months trying to build a reliable digital
TV set top box using just the Linux kernel, busybox and our custom
applications, I would not care to repeat the experience. The overcommit
functionality appears to be an all-or-nothing approach - that is, either we
ended up with no page sharing *at all*, which vastly inflated the
total amount of RAM required (way beyond the amount of RAM in
the device, in fact), or we had to put up with the box just crashing
"randomly" from time to time because the applications were not expecting to
be just terminated by the kernel at arbitrary times. There was no mass
storage device except for a read-only (for day-to-day running purposes)
EEPROM device - therefore no swap at all.
--
Stewart Brodie
Nov 14 '05 #207

In article <ch**********@nntp1.jpl.nasa.gov>, E. Robert Tisdale
<E.**************@jpl.nasa.gov> writes
Chris Hills wrote:
What C programming isn't embedded?
By which I mean pure C
as opposed to the pseudo C people do with C++ compilers.

When you think that anything with electricity applied to it
has a micro these days.
Autos have 50 odd each, washing machines, microwaves,
radios, mobile phones, smart cards missiles etc. etc.
That is a lot of devices that get programmed in C.


Yes, but they represent only a tiny fraction of all C programs.
Embedded [C] programmers represent
a tiny fraction of all C programmers.


based on what?

There are many thousands of embedded items with C in them. Where are all
these non embedded C programs?

/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
\/\/\/\/\ Chris Hills Staffs England /\/\/\/\/\
/\/\/ ch***@phaedsys.org www.phaedsys.org \/\/
\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/
Nov 14 '05 #208

In article <rj********************************@4ax.com>, Brian Inglis
<Br**********@SystematicSW.Invalid> writes
On Thu, 09 Sep 2004 13:54:31 -0700 in comp.std.c, "E. Robert Tisdale"
<E.**************@jpl.nasa.gov> wrote:
Chris Hills wrote:
What C programming isn't embedded?
By which I mean pure C
as opposed to the pseudo C people do with C++ compilers.

When you think that anything with electricity applied to it
has a micro these days.
Autos have 50 odd each, washing machines, microwaves,
radios, mobile phones, smart cards missiles etc. etc.
That is a lot of devices that get programmed in C.


Yes, but they represent only a tiny fraction of all C programs.
Embedded [C] programmers represent
a tiny fraction of all C programmers.


Where do you get your statistics on the number of embedded and
non-embedded C programmers?
I don't know either way, but am willing to accept that the number of C
programmers working on embedded devices may now be larger than the
number working on OSes, tools, DBMSes, *[iu]x projects in C nowadays,
as the commercial world seems to have switched from C to newer
languages and tools.


I happen to work where I have visibility of the tools people. They work
in C++ mostly. The small OSes are C.

What I can't see is where the non-embedded C programs would be.

/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
\/\/\/\/\ Chris Hills Staffs England /\/\/\/\/\
/\/\/ ch***@phaedsys.org www.phaedsys.org \/\/
\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/
Nov 14 '05 #209

On Thu, 09 Sep 2004 18:37:11 -0400, Joe Wright
<jo********@comcast.net> wrote:
E. Robert Tisdale wrote:

[...]
Embedded [C] programmers represent
a tiny fraction of all C programmers.


I was embedded last night. And again tonight, I hope. :-)


But were you using C?

Regards,

-=Dave
--
Change is inevitable, progress is not.
Nov 14 '05 #210

Charles Sanders wrote:
But what others are saying is that many applications
are allocating much, much more memory than they need, "just
in case", and that lazy allocation by the OS results in
a large improvement in performance.
Poor application design is no excuse for poor OS design.
Even well-designed apps are being made to suffer erratic
behavior without being given any sensible way to cope
with it.
the memory failure results in a signal, which can in theory
be caught and handled - not that I would want to do it.
But the point at which that occurs during program
execution cannot be reliably controlled by the program.
For example, it might occur in the middle of an
incomplete data structure modification.
and the performance/safety trade off will be different
in a nuclear reactor controller, a critical database server,
and a user's desk top PC, to name a few extreme cases.
Actually you have no way of accurately predicting the
consequences of a program malfunctioning on a desktop
PC. There is NO EXCUSE for INTENTIONALLY not providing
the "end user" with reliable facilities.
... The fact that some OS designers have
made decisions on these trade offs that you disagree with
does not, in my opinion, mean that they are "amateurs".


Memory objects are really no different from file objects
in this regard. What would you think if OS designers
applied the same "logic" to opening a file? The open
would just stash the name and always report success.
Then when the program proceeds to try to use the file
data, oops, ABEND. How in the world would we write
reasonable programs that have to operate in such a
hostile environment? And why should we??

The general principle *has* to be, when a program
requests access to some controlled resource, it needs
to be told *before proceeding* if the resource is
available. In the case of VM, that is traditionally
done by using a "working-set" model, which has the
further advantages that (a) if an attempt is made to
add another excessively large program to the mix, that
attempt will *immediately* be flagged as a problem, at
an appropriate place to implement a reasonable recovery
policy; (b) if a process gets too big during execution,
it does not terminate *other*, well-behaved processes.
Nov 14 '05 #211


[dropped c.s.c xpost]

On Fri, 10 Sep 2004, Richard Kettlewell wrote:

Chris Hills <ch***@phaedsys.org> writes:
What C programming isn't embedded?
By which I mean pure C as opposed to the pseudo C people do with
C++ compilers.
[...] There are many thousands of embedded items with C in them. Where are
all these non embedded C programs?


There are about 14,000 packages available in the OS I use. Some of
those packages won't contain programs (but rather a lot of them
contain multiple programs); some of them aren't written in C, but my
experience is that (when I look at the source) the majority are. That
should account for at least several thousand C programs.

I think you have to miss rather a lot to ask "what C programming isn't
embedded": for all I know it might be a small minority but it's
definitely there.


Not even a small minority. /Maybe/ a large minority. I think
the truth is that while "99% of all C programs are embedded" is
false, it /is/ the case that a programmer could go his whole life
and yet 99% of the C programs he would deal with would be embedded.
Chris is apparently one of those programmers.

If you use *nix, then I would expect a good 50% of your utility
programs would be written in C. Things like 'cat' and 'grep'
and 'lex' and 'yacc' and 'gcc' and so on.

HTH,
-Arthur
Nov 14 '05 #212

%% Stewart Brodie <st************@ntlworld.com> writes:

sb> The only possible workaround I can think of is to always memset
sb> the data to a non-zero value

Actually, all you have to do is write one byte to each page. That's
enough to force the system to give you all the memory you asked for, and
that's not much of an overhead.

--
-------------------------------------------------------------------------------
Paul D. Smith <ps****@nortelnetworks.com> HASMAT--HA Software Mthds & Tools
"Please remain calm...I may be mad, but I am a professional." --Mad Scientist
-------------------------------------------------------------------------------
These are my opinions---Nortel Networks takes no responsibility for them.
Nov 14 '05 #213

Chris Hills wrote:
There are many thousands of embedded items with C in them. Where are all
these non embedded C programs?


In millions of application programs.
Nov 14 '05 #214

On 2004-09-10 17:45, Paul D. Smith wrote:
%% Stewart Brodie <st************@ntlworld.com> writes:

sb> The only possible workaround I can think of is to always memset
sb> the data to a non-zero value

Actually, all you have to do is write one byte to each page. That's
enough to force the system to give you all the memory you asked for, and
that's not much of an overhead.


How does the program know the page size?

-- Niklas Matthies
Nov 14 '05 #215

Niklas Matthies <us***********@nmhq.net> writes:
On 2004-09-10 17:45, Paul D. Smith wrote:
%% Stewart Brodie <st************@ntlworld.com> writes:

sb> The only possible workaround I can think of is to always memset
sb> the data to a non-zero value

Actually, all you have to do is write one byte to each page. That's
enough to force the system to give you all the memory you asked for, and
that's not much of an overhead.


How does the program know the page size?

On POSIX systems one can use getpagesize() or sysconf(_SC_PAGE_SIZE) or
sysconf(_SC_PAGESIZE). No idea about non-POSIX systems (or if that concept
even make sense on all of them).
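Combining the one-byte-per-page suggestion with the POSIX page-size query, a sketch might look like this. It is POSIX-only (as noted, the concept may not even exist elsewhere), and the function name is invented for illustration.

```c
#include <stdlib.h>
#include <unistd.h>

/* POSIX-only sketch: touch one byte in each page of a fresh allocation
   so lazy allocation is forced to commit real memory immediately,
   at far less cost than writing every byte with memset. */
void *malloc_touched(size_t n)
{
    long page = sysconf(_SC_PAGESIZE);
    unsigned char *p = malloc(n);
    size_t i;

    if (p == NULL || page <= 0)
        return p;               /* allocation failed, or no page concept */
    for (i = 0; i < n; i += (size_t)page)
        p[i] = 1;               /* one write per page commits the page */
    return p;
}
```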

Or was that a rhetorical question?

Dragan

--
Dragan Cvetkovic,

To be or not to be is true. G. Boole No it isn't. L. E. J. Brouwer

!!! Sender/From address is bogus. Use reply-to one !!!
Nov 14 '05 #216

On 2004-09-10 18:26, Dragan Cvetkovic wrote:
Niklas Matthies <us***********@nmhq.net> writes:
On 2004-09-10 17:45, Paul D. Smith wrote:
%% Stewart Brodie <st************@ntlworld.com> writes:

sb> The only possible workaround I can think of is to always memset
sb> the data to a non-zero value

Actually, all you have to do is write one byte to each page.
That's enough to force the system to give you all the memory you
asked for, and that's not much of an overhead.


How does the program know the page size?


On POSIX systems one can use getpagesize() or sysconf(_SC_PAGE_SIZE)
or sysconf(_SC_PAGESIZE). No idea about non-POSIX systems (or if
that concept even make sense on all of them).

Or was that a rhetorical question?


I'm not sure. :) I guess I wanted to point out that "all you have to
do is ..." introduces a portability problem. And just for reliably
allocating some piece of memory.

-- Niklas Matthies
Nov 14 '05 #217

Pete Gray wrote:
Reading this thread, I was kind of curious as to the "state of play" of
the major languages - a web search revealed quite a useful site ...

http://www.tiobe.com/tpci.htm

... which shows that there hasn't been a *huge* change in quite a while.
Although I was kind of surprised to see the fairly recent dip in Java.


Considering some of the other oddball languages they list I am surprised
they hadn't listed Forth (which on the same basis of their published
calculation method came above Prolog).

--
************************************************** ******************
Paul E. Bennett ....................<email://peb@a...>
Forth based HIDECS Consultancy .....<http://www.amleth.demon.co.uk/>
Mob: +44 (0)7811-639972 .........NOW AVAILABLE:- HIDECS COURSE......
Tel: +44 (0)1235-811095 .... see http://www.feabhas.com for details.
Going Forth Safely ..... EBA. www.electric-boat-association.org.uk..
************************************************** ******************
Nov 14 '05 #218

On Fri, 10 Sep 2004 20:40:38 +0100, "Paul E. Bennett"
<pe*@amleth.demon.co.uk> wrote:
Pete Gray wrote:
Reading this thread, I was kind of curious as to the "state of play" of
the major languages - a web search revealed quite a useful site ...

http://www.tiobe.com/tpci.htm

... which shows that there hasn't been a *huge* change in quite a while.
Although I was kind of surprised to see the fairly recent dip in Java.


Considering some of the other oddball languages they list I am surprised
they hadn't listed Forth (which on the same basis of their published
calculation method came above Prolog).


They do. Scroll down a bit. It's number 26 at 0.130%. Which puts it
ahead of Ruby, Tcl/Tk, REXX, SmallTalk and Objective-C, but below
Postscript, RPG, Scheme, and AWK.

Prolog is number 18 at 0.259%.

Regards,

-=Dave
--
Change is inevitable, progress is not.
Nov 14 '05 #219

%% Niklas Matthies <us***********@nmhq.net> writes:

nm> I'm not sure. :) I guess I wanted to point out that "all you have
nm> to do is ..." introduces a portability problem. And just for
nm> reliably allocating some piece of memory.

Well, malloc() can't be implemented in portable C anyway of course.

Yes, if you wanted to write a 100% portable, 100% foolproof wrapper for
malloc() you wouldn't be able to make use of this capability. You can
write a reliable POSIX-portable version, though, and/or you can make a
very conservative assumption and write every 256th byte and call it
good. Heck, even if you are ridiculously conservative and write every
5th byte that's still only 20% of the effort it would take to write all
of them.

--
-------------------------------------------------------------------------------
Paul D. Smith <ps****@nortelnetworks.com> HASMAT--HA Software Mthds & Tools
"Please remain calm...I may be mad, but I am a professional." --Mad Scientist
-------------------------------------------------------------------------------
These are my opinions---Nortel Networks takes no responsibility for them.
Nov 14 '05 #220

In article <vp**************@lemming.engeast.baynetworks.com>,
"Paul D. Smith" <ps****@nortelnetworks.com> wrote:
%% Niklas Matthies <us***********@nmhq.net> writes:

nm> I'm not sure. :) I guess I wanted to point out that "all you have
nm> to do is ..." introduces a portability problem. And just for
nm> reliably allocating some piece of memory.

Well, malloc() can't be implemented in portable C anyway of course.

Yes, if you wanted to write a 100% portable, 100% foolproof wrapper for
malloc() you wouldn't be able to make use of this capability. You can
write a reliable POSIX-portable version, though, and/or you can make a
very conservative assumption and write every 256th byte and call it
good. Heck, even if you are ridiculously conservative and write every
5th byte that's still only 20% of the effort it would take to write all
of them.


Not on a modern CPU, where what counts is the number of cache lines
that your code touches.
Nov 14 '05 #221

"Paul D. Smith" <ps****@nortelnetworks.com> wrote:
%% Niklas Matthies <us***********@nmhq.net> writes:

nm> I'm not sure. :) I guess I wanted to point out that "all you have
nm> to do is ..." introduces a portability problem. And just for
nm> reliably allocating some piece of memory.

Well, malloc() can't be implemented in portable C anyway of course.
We're not talking about implementing malloc in portable C - it's part of the
standard library. It's reasonable to assume that the standard library will
be implemented to integrate (efficiently) with the execution environment
which will almost certainly mean an environment-specific implementation. The
concern is that library implementations are not implementing the required
semantics (although I also accept that the C library implementations may in
turn be being stymied by the environment to some extent).

Yes, if you wanted to write a 100% portable, 100% foolproof wrapper for
malloc() you wouldn't be able to make use of this capability. You can
write a reliable POSIX-portable version, though, and/or you can make a
very conservative assumption and write every 256th byte and call it good.
"Reliable on POSIX only", "very conservative". Neither of these equates to
"portable" as far as I'm concerned.

What other standard library routines would you not mind getting lies back
from? Douglas A Gwyn mentioned the file access functions in another
article. What if fopen() returned a valid non-NULL handle but then just
caused a crash when you tried to read data from it. In fact, it is quite
possible that this behaviour can occur if the C library's stdio buffering is
using malloc to allocate the buffer memory. You open the file, get a valid
handle, you're not at the end of the file, the file is buffered, you call
getc() and a crash occurs (in __filbuf or whatever the buffer filling
routine is called) because the buffer was "successfully" allocated yet the
memory wasn't actually available. If it had been told that memory wasn't
available in the first place, __filbuf (or whatever) could have disabled
buffering on the FILE stream or the fopen() could have been failed in the
first place or something.

Heck, even if you are ridiculously conservative and write every 5th byte
that's still only 20% of the effort it would take to write all of them


To be honest, the chances are that memset would be more efficient than that
on many systems (although devious implementations could end up doing
something sneaky with page mapping and copy-on-write that defeats it). But
we're talking about having to take silly, timewasting and
implementation-specific measures just to ensure that the library is
implementing the required semantics.
--
Stewart Brodie
Nov 14 '05 #222

%% Stewart Brodie <st************@ntlworld.com> writes:

sb> We're not talking about implementing malloc in portable C - it's
sb> part of the standard library.

What I'm saying is that if you WANT this behavior you can get it, even
on operating systems which use lazy allocation, by reworking the
standard library somewhat to get a malloc() that behaves as you like.
If you do that then of course your reworking does not need to be
portable.

And, you can even get it for your critical applications while leaving
other, less fortunate applications to muddle along with their unreliable
lazy malloc().
Yes, if you wanted to write a 100% portable, 100% foolproof wrapper
for malloc() you wouldn't be able to make use of this capability.
You can write a reliable POSIX-portable version, though, and/or you
can make a very conservative assumption and write every 256th byte
and call it good.


sb> "Reliable on POSIX only", "very conservative". Neither of these
sb> equates to "portable" as far as I'm concerned.

Yes, portability often struggles with practicality and reality.
Everyone draws these lines differently. I don't view "portable" as a
boolean value, although I grant that some (esp. those on comp.std.c)
do.

sb> But we're talking about having to take silly, timewasting and
sb> implementation-specific measures just to ensure that the library
sb> is implementing the required semantics.

Yes, that's what we're talking about. I suppose you'll grant that the
many systems that are designed this way aren't done just to force you to
take silly, timewasting and implementation-specific measures. The
implementation issues surrounding virtual memory are quite complex and
the arguments for and against lazy allocation make for some fascinating
reading.

--
-------------------------------------------------------------------------------
Paul D. Smith <ps****@nortelnetworks.com> HASMAT--HA Software Mthds & Tools
"Please remain calm...I may be mad, but I am a professional." --Mad Scientist
-------------------------------------------------------------------------------
These are my opinions---Nortel Networks takes no responsibility for them.
Nov 14 '05 #223

Chris Hills <ch***@phaedsys.org> wrote in message news:<5I**************@phaedsys.demon.co.uk>...
....
I happen to work where I have visibility of the tools people. They work
in C++ mostly. The small OS's are C

What I can't see is where the non-embedded C programs would be.


Do you have reason to believe that your 'seeing' is comprehensive? I
have no idea how much non-embedded C code there is out there. If were
to make the mistake of extrapolating from my own personal experience,
I would conclude that most of the programs in the world are
non-embedded C programs written for batch processing in a strict "C"
locale. Of course, I know that isn't true.
Nov 14 '05 #224

In article <ww*************@rjk.greenend.org.uk>, Richard Kettlewell
<in*****@invalid.invalid> writes
Chris Hills <ch***@phaedsys.org> writes:
E. Robert Tisdale <E.**************@jpl.nasa.gov> writes
Chris Hills wrote: What C programming isn't embedded?
By which I mean pure C as opposed to the pseudo C people do with
C++ compilers.

When you think that anything with electricity applied to it
has a micro these days.
Autos have 50 odd each, washing machines, microwaves,
radios, mobile phones, smart cards missiles etc. etc.
That is a lot of devices that get programmed in C.

Yes, but they represent only a tiny fraction of all C programs.
Embedded [C] programmers represent
a tiny fraction of all C programmers.


based on what?

There are many thousands of embedded items with C in them. Where are
all these non embedded C programs?


There are about 14,000 packages available in the OS I use. Some of
those packages won't contain programs (but rather a lot of them
contain multiple programs); some of them aren't written in C, but my
experience is that (when I look at the source) the majority are. That
should account for at least several thousand C programs.

I think you have to miss rather a lot to ask "what C programming isn't
embedded": for all I know it might be a small minority but it's
definitely there.

I agree. My point was that virtually anything with electric power (or
batteries) has an MCU these days and 80% of them are programmed in C
(the rest is assembler with about 3% being C++).

Embedded systems are not all small, with just a few lines of code. These
days car radios have about 1,000,000 lines of code in them (it came as a
surprise to me too). Telecoms systems are vast; so are medical equipment
and control systems (lifts, production lines, robotics) etc. It goes on
and on.

The use of embedded systems (and C) is expanding all the time.

However, on desktops and in general non-embedded computing, C has tended to
be replaced by C++, C# and Java.
/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
\/\/\/\/\ Chris Hills Staffs England /\/\/\/\/\
/\/\/ ch***@phaedsys.org www.phaedsys.org \/\/
\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/
Nov 14 '05 #225

Paul E. Bennett wrote:
Considering some of the other oddball languages they list I am surprised
they hadn't listed Forth (which on the same basis of their published
calculation method came above Prolog).


Whatever happened to Modula-2?

Paul Burke

Nov 14 '05 #226

In article <8b**************************@posting.google.com >, James
Kuyper <ku****@wizard.net> writes
Chris Hills <ch***@phaedsys.org> wrote in message news:<k+cb8dCy4LQBFA5O@phaedsy
s.demon.co.uk>...
...
What C programming isn't embedded. By which I mean pure C as opposed to
the pseudo C people do with C++ compilers.


People who work too long in one type of environment have a tendency to
forget about the other environments. That doesn't mean that they don't
exist. I work on a project which processes the data coming down from
the MODIS instruments on the Terra and Aqua satellites. We've got
about 88 "Process groups" working on this data, each of which consist
of one or more programs, and the bulk of that code is written in C,
the rest is mostly either Fortran 77 or Fortran 90. It can't be
"pseudo C" because we're not allowed to deliver code that needs a C++
compiler to build it.

By the way - if code is written in the common subset of C and C++, in
what sense is it "pseudo C"?


That is fine... as long as you are using only the parts of C++ that are
exactly the same as C. Some parts that are common to both AFAIK behave
differently. I forget which exactly, as I have not used C++ for some
while.

A lot of people use "C/C++", which is C++ that looks like C but would
behave differently (if it compiled at all) under a pure C compiler.

I have seen a lot of people who said they used C/C++ when what they used
was "almost C" in a C++ compiler.
/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
\/\/\/\/\ Chris Hills Staffs England /\/\/\/\/\
/\/\/ ch***@phaedsys.org www.phaedsys.org \/\/
\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/
Nov 14 '05 #227

Dave Hansen wrote:
On Fri, 10 Sep 2004 20:40:38 +0100, "Paul E. Bennett"
<pe*@amleth.demon.co.uk> wrote:
Pete Gray wrote:
Reading this thread, I was kind of curious as to the "state of play" of
the major languages - a web search revealed quite a useful site ...

http://www.tiobe.com/tpci.htm

... which shows that there hasn't been a *huge* change in quite a while.
Although I was kind of surprised to see the fairly recent dip in Java.


Considering some of the other oddball languages they list I am surprised
they hadn't listed Forth (which on the same basis of their published
calculation method came above Prolog).


They do. Scroll down a bit. It's number 26 at 0.130%. Which puts it
ahead of Ruby, Tcl/Tk, REXX, SmallTalk and Objective-C, but below
Postscript, RPG, Scheme, and AWK.

Prolog is number 18 at 0.259%.


Amazing how I missed seeing it. I posted to the company as well, following
my own search with the search parameters as they described, and Forth came
in above Prolog (strange).

As they declare that they ignore values that are more than twice the
previous month's figures, I am sure the results will occasionally be
less than those you might obtain on a trended plot (which takes into
account at least some of the peak). Might be interesting to take one's own
month-by-month census, I suppose.

--
************************************************** ******************
Paul E. Bennett ....................<email://peb@a...>
Forth based HIDECS Consultancy .....<http://www.amleth.demon.co.uk/>
Mob: +44 (0)7811-639972 .........NOW AVAILABLE:- HIDECS COURSE......
Tel: +44 (0)1235-811095 .... see http://www.feabhas.com for details.
Going Forth Safely ..... EBA. www.electric-boat-association.org.uk..
************************************************** ******************
Nov 14 '05 #228

Chris Hills <ch***@phaedsys.org> wrote in message news:<BS**************@phaedsys.demon.co.uk>...
In article <8b**************************@posting.google.com >, James
Kuyper <ku****@wizard.net> writes ....
By the way - if code is written in the common subset of C and C++, in
what sense is it "pseudo C"?


That is fine... as long as you are using all the parts of C++ that are
exactly the same as C. Some parts that are common to both AFAIK behave
differently. I forget which exactly as I have not used C++ for some
while.


With a bit of cleverness or bad luck you could write code which is
legal in both languages, but with a different meaning in each.
However, after reviewing the differences (see Annex C of the C++
standard), I think it would be pretty difficult to do so by accident.
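[One well-known case from that common subset is the type of a character constant: int in C, char in C++. A minimal sketch, with a function name invented for the example:]

```c
#include <stdio.h>

/* In C a character constant such as 'a' has type int, so this returns
 * sizeof(int); the very same line compiled as C++ returns 1, because
 * there 'a' has type char.  Legal in both languages, different answer. */
size_t char_constant_size(void)
{
    return sizeof 'a';
}
```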
A lot of people use c/C++ which is C++ that looks like C but would
behave differently (if it compiled at all) in a pure C compiler.


I can't imagine why; if you're going to bother writing code that can
only be compiled correctly with C++, why wouldn't you take advantage
of the many neat features that C++ has that are incompatible with C? I
think it's much more common to write C++ that uses only a few minor
features of C++, rather than taking full advantage of the power of the
language. Such code won't compile as C code.
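[A concrete example of such a minor C++-only habit is dropping the check or cast around malloc(): the implicit void * conversion below is valid C but is rejected by a C++ compiler, so "C/C++" code tends to grow casts that pure C does not need. The function name is invented for illustration.]

```c
#include <stdlib.h>

/* Valid C: the void * returned by malloc converts implicitly to int *.
 * A C++ compiler rejects this line without an explicit cast, which is
 * one way code written "for both compilers" drifts away from pure C. */
int *make_int_buffer(size_t count)
{
    int *p = malloc(count * sizeof *p);
    return p;   /* may be NULL on failure; caller must check and free */
}
```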
Nov 14 '05 #229

In comp.std.c Chris Hills <ch***@phaedsys.org> wrote:

However on desktops and general non-embedded computing C has tended to
be replaced by C++, c# and JAva.


Not really. For new application programming, maybe (although I'd bet
that Visual Basic is far more popular in that arena than any of the
C-ish languages), but there is a *lot* of existing C code that's being
maintained and enhanced and C is still the language of choice by far for
system programming.

-Larry Jones

Hello, local Navy recruitment office? Yes, this is an emergency... -- Calvin
Nov 14 '05 #230

Paul Burke <pa**@scazon.com> wrote:

Whatever happened to Modula-2?


Being that Modula-2 combines the expressive power of Pascal with the
safety and security of C, do you really need to ask? ;-)

-Larry Jones

I don't think math is a science, I think it's a religion. -- Calvin
Nov 14 '05 #231

In article <8b************************@posting.google.com>, James Kuyper
<ku****@wizard.net> writes
Chris Hills <ch***@phaedsys.org> wrote in message news:<5IA8x2AAPdQBFA5S@phaedsy
s.demon.co.uk>...
...
I happen to work where I have visibility of the tools people. They work
in C++ mostly. The small OS's are C

What I can't see is where the non-embedded C programs would be.
Do you have reason to believe that your 'seeing' is comprehensive?


No but I see a lot of people involved in desktop and PC things . They
use C++ Java and C#.

Also, if you look at the stats for job ads in many computer/IT
magazines, C has all but disappeared compared to C#, Java, VB, XML, Oracle
etc.
I
have no idea how much non-embedded C code there is out there. If were
to make the mistake of extrapolating from my own personal experience,
I would conclude that most of the programs in the world are
non-embedded C programs written for batch processing in a strict "C"
locale. Of course, I know that isn't true.


/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
\/\/\/\/\ Chris Hills Staffs England /\/\/\/\/\
/\/\/ ch***@phaedsys.org www.phaedsys.org \/\/
\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/
Nov 14 '05 #232

"Douglas A. Gwyn" <DA****@null.net> wrote:
Microsoft delivered a presentation using examples that
they have already implemented as part of their extended
C library. See
http://www.open-std.org/jtc1/sc22/wg.../docs/n997.pdf
for an associated official document.
Some people have claimed that this is a proposal to lock
the C standard into a proprietary vendor product, but
that is patently false, as can be seen by reading the
proposal.


OTOH, that document is not entirely free from gratuitous non-ISOisms
(what's a __cdecl when it's at home, and where _is_ that home?). If they
can pull off a document that neither uses non-ISO constructs as a
starting point nor tries to introduce obvious M$-specific features in
the new suggestions, I'd like to see them do so, but I'm not holding my
breath.

Richard
Nov 14 '05 #233

Richard Bos wrote:
OTOH, that document is not entirely free from gratuitous non-ISOisms
(what's a __cdecl when it's at home, and where _is_ that home?). If they
can pull off a document that neither uses non-ISO constructs as a
starting point nor tries to introduce obvious M$-specific features in
the new suggestions, I'd like to see them do so, but I'm not holding my
breath.


? You don't have to worry about WG14 using __cdecl in any
of our specs.
Anyway, our response seems largely to be to encourage the
development of Randy Meyers' proposal, which you should read.
Nov 14 '05 #234


This discussion thread is closed

Replies have been disabled for this discussion.