Bytes IT Community

C99 compiler access

P: n/a
I have access to a wide variety of different platforms here at JPL
and they all have pretty good C99 compilers.

Some people claim that they have not moved to the new standard
because of the lack of C99-compliant compilers.
Is this just a lame excuse for back-sliding?
Nov 14 '05
233 Replies


P: n/a
Douglas A. Gwyn wrote:
David Hopwood wrote:
Not true. Because auto variables are constant size and recursion can be
limited, it may be the case on a particular platform that a given program
will *never* have undefined behaviour due to a stack overflow. This is
very unlikely to be true if the program uses VLAs, because the main point
of using VLAs is to allocate objects of arbitrary sizes.
Actually it is to allow *parametric* array sizes,
not *arbitrarily large* sizes.


I meant what I said. The *main point* of using VLAs is to allocate objects
of arbitrary sizes. If you have a fixed bound on the size that is small
enough, you don't need to use a VLA.
Even if the entire
app were coded using constant, largest supported
sizes for every array, in general you still wouldn't
know whether the stack will overflow at run time,
unless you do careful analysis (and happen to have
an algorithm that is not too dynamic).


How difficult it is to rigorously prove that a program does not cause a
stack overflow is not the issue here. For some purposes, testing at the
maximum recursion depth is sufficient. A program that uses arbitrary-size
VLAs *will* fail whenever it allocates a sufficiently large VLA. That in
itself is sufficient reason not to use them.

David Hopwood <da******************@blueyonder.co.uk>
Nov 14 '05 #151

P: n/a
Chris Hills <ch***@phaedsys.org> writes:
[...]
The standard is freely available from a multitude of sources. The
recognised test suites are not; they cost a lot of money. However, there
is a lot of work in them.


If the standard is freely available, I wasted $18. Are you referring
to the draft versions?

--
Keith Thompson (The_Other_Keith) ks***@mib.org <http://www.ghoti.net/~kst>
San Diego Supercomputer Center <*> <http://users.sdsc.edu/~kst>
We must do something. This is something. Therefore, we must do this.
Nov 14 '05 #152

P: n/a
On Fri, 03 Sep 2004 16:43:01 +0000, CBFalconer wrote:
Dan Pop wrote:

... snip ...

And GNU C is the very last language you'd want a new C standard
to conflict with. Without these conflicts, gcc would have been a
conforming C99 translator by the time gcc 3.0 was released or
shortly afterwards and this would have created the motivation for
commercial implementors to support C99, too. End result: success
instead of failure.


This may be the wrong venue in which to ask, but how many
programmers really use the GNU extensions?


Apart from a very few examples of "GCC only" software, most software I've
seen uses extensions that can easily be switched off. For example, things
like putting "__attribute__ ((__format__ (__printf__, x, y)))" on your
printf-like functions, which you can then turn off if you aren't
compiling with GCC.

--
James Antill -- ja***@and.org
Need an efficient and powerful string library for C?
http://www.and.org/vstr/

Nov 14 '05 #153

P: n/a
Keith Thompson wrote:
Agreed. Even if you can't assume that a given implementation will
support, say, complex numbers, it's probably safe at this point to
assume that an implementation that does support them will do so in a
manner consistent with the C99 specification.
Yes, that is a good example. Similarly for the new
math functions now being considered for C++ as well
as C; it is much better for apps that need these to
use a *single* interface to invoke them than to have
to be adapted to each new platform that comes along.
I suspect the best way to improve the situation would be to devote the
necessary resources to make gcc support C99 at least as well as it
currently supports C90. If that happened, other vendors might feel
more pressure to be gcc-compatible than they now feel to be
C99-compatible.


That is a good point.

Nov 14 '05 #154

P: n/a
Scott Robert Ladd wrote:
It simply isn't going to happen. People developing "free" software are
content with GCC, which does a fairly good job of approaching C99
compliance -- in comparison to most commercial compilers, that is. If GCC
is to attain full C99 compliance, someone is going to have to fund it (or
find a lot of available graduate students?)


This reminds me of a complaint I have with "the system"
that we currently have, in which *nobody seems to have
"official" responsibility for seeing that things that
ought to be done are done*. I have noticed this in
many areas, for example:
- checking standards conformance
- archiving equipment technical documentation
- archiving system software
- documenting data formats
- overseeing national intelligence
- declassifying no-longer-sensitive information

Now of course there are *some* official systems that
are supposed to help with *some* of this, for example
*some* products, mainly consumer items, undergo *some*
official testing against safety standards, and the (US)
Freedom of Information Act is supposed to help with that
last bullet. But in actuality these things are done
minimally or not at all, even when it is in the evident
best commercial or political interests of the parties
who are in the best position to do the job.

Consequently, what little is done in such areas depends
on the effort of concerned volunteers, who often are not
in a very good position to do the job right (for example,
may not have access to a complete set of documentation).
I chose the above list since I have recently been
involved in them, but as I indicated it mostly has been
done without the encouragement or support of my employer.

My father used to blame that kind of shortsightedness on
management having been taught by the likes of the
Harvard Business School, and I can't say that he was
wrong about that.

Nov 14 '05 #155

P: n/a
Chris Hills wrote:
I am confused. The USERS of the compiler were saying ...
Sorry, I misread it in context as saying that some
compiler vendors were encouraging that attitude.
In other words there are many users out there who think the standards
committee should be writing the standard to fit the compilers!
Yes, that kind of problem has existed "forever".
It is a matter of education, not of standards.
So how do you sell the importance of C99 to people like that? And there
are a LOT of them.


Until they are educated about the value of
standards *and why as programmers they should
conform to them as much as possible*, they
aren't going to appreciate C99's standardizing
additional features that they have a good use
for.

Nov 14 '05 #156

P: n/a
On Sun, 5 Sep 2004 22:18:29 +0100 in comp.std.c, Chris Hills
<ch***@phaedsys.org> wrote:
In article <Vu********************@comcast.com>, Douglas A. Gwyn
<DA****@null.net> writes
Chris Hills wrote:
<DA****@null.net> writes
Chris Hills wrote:
>... most of the worlds major

Anyway, there are several compilers that have incorporated C99
features, on a path to full compliance.
"several" "features" this is not good enough. We need "most" and "full
compliance"


My point was that the 1999 standard is playing a role in the
evolution of C compilers. It is unrealistic to expect fully
conforming implementations on day 1 of the standard.


However we are now 4 years down the line, not "day 1". As has been
pointed out, one compiler is C99-conforming (the Tasking one, plus a couple
of smaller players), so it is possible to do. However, there seems to be no
commercial impetus to do it.

If C99 was really a requirement anywhere they would all be producing C99
compilers.

Somehow we need to generate that feeling amongst programmers that it is
important.


You have to create marketing benefits for vendors, who will sell to
managers, who will tell programmers what is important to them.

--
Thanks. Take care, Brian Inglis Calgary, Alberta, Canada

Br**********@CSi.com (Brian[dot]Inglis{at}SystematicSW[dot]ab[dot]ca)
fake address use address above to reply
Nov 14 '05 #157

P: n/a
In <ch**********@pegasus.csx.cam.ac.uk> Joseph Myers <js***@gcc.gnu.org> writes:
In article <ch**********@sunnews.cern.ch>, Dan Pop <Da*****@cern.ch> wrote:
And GNU C is the very last language you'd want a new C standard to
conflict with. Without these conflicts, gcc would have been a conforming
C99 translator by the time gcc 3.0 was released or shortly afterwards and
this would have created the motivation for commercial implementors to
support C99, too. End result: success instead of failure.
As I stated in my previous posting on the subject, this is not an
accurate representation of the reasons behind the status of C
standards implementation in GCC.


Each person somewhat connected to the gcc project has his own opinions on
this issue, as seen over the years whenever this topic is discussed.

What is a fact is that most gcc issues are related to conflicts between
GNU C and C99 over the same feature, not to missing support for C99
features with no conflicting GNU C equivalent.
GCC does not have a conforming C90 implementation either. I have been


The gcc documentation claims otherwise.

Dan
--
Dan Pop
DESY Zeuthen, RZ group
Email: Da*****@ifh.de
Nov 14 '05 #158

P: n/a
In <f_********************@fe1.news.blueyonder.co.uk> David Hopwood <da******************@blueyonder.co.uk> writes:
Richard Tobin wrote:
David Hopwood <da******************@blueyonder.co.uk> wrote:
You mean malloc() causes undefined behaviour when there is insufficient
memory?


I mean malloc() may return a non-null value and then fail when you
try to actually use the memory. Presumably you already know about this.


You mean overcommitment of virtual memory, then? The behaviour in that
case is not entirely undefined; most platforms that overcommit virtual
memory will start killing processes, but will not cause them to have
otherwise undefined behaviour.


This is no different from the behaviour of a program exceeding its
stack limit (e.g. due to using VLAs). In either case, the program gets
killed.

Dan
--
Dan Pop
DESY Zeuthen, RZ group
Email: Da*****@ifh.de
Nov 14 '05 #159

P: n/a
In <ch*********@pegasus.csx.cam.ac.uk> Joseph Myers <js***@gcc.gnu.org> writes:
The C99 implementation could be completed in a few months. (That is
for targets with sane floating-point hardware; not 387 floating point
on x86 which makes it hard to do computations in the range and
precision of float / double or round those done in long double to the
range and precision of float / double without storing to memory.)


Any chance to get -ffloat-store to solve this issue reliably? Right now,
it only fixes *some* precision-related issues.

Dan
--
Dan Pop
DESY Zeuthen, RZ group
Email: Da*****@ifh.de
Nov 14 '05 #160

P: n/a
In article <f_********************@fe1.news.blueyonder.co.uk>,
David Hopwood <da******************@blueyonder.co.uk> wrote:
You mean overcommitment of virtual memory, then?
Yes.
The behaviour in that
case is not entirely undefined; most platforms that overcommit virtual
memory will start killing processes, but will not cause them to have
otherwise undefined behaviour.


True, but (on the platforms I use) the undefined behaviour resulting
from stack overflow turns out to be to kill the process (or maybe some
other process: I suspect that stack space is overcommitted in the same
way as heap space), so the practical result is that malloc is no safer
than stack-allocation.

-- Richard
Nov 14 '05 #161

P: n/a
In article <kr********************@comcast.com>,
Douglas A. Gwyn <DA****@null.net> wrote:
For me, the behaviour of malloc() is not the only consideration in
choosing a platform.


What, reliable execution of carefully written programs
is not important?


I assume you can see the disparity between what I said and how you
replied, but to be explicit: the behaviour of malloc() is not the only
consideration in the reliable behaviour of carefully written programs,
nor is the reliable behaviour of carefully written programs the only
consideration in choosing a platform.

-- Richard

Nov 14 '05 #162

P: n/a
In <41***************@yahoo.com> CBFalconer <cb********@yahoo.com> writes:
This may be the wrong venue in which to ask, but how many
programmers really use the GNU extensions?


Once you define the language of your project as being GNU C, you have no
good reason not to use them. Two well known such projects are the Linux
kernel and glibc v2.

To some people and projects, performance is more important than being
able to compile with a different compiler and gcc has plenty of obscure
extensions that are aimed at improving the performance of the generated
code in ways not possible in standard C.

The GNU C extensions are important enough in the Linux world that the
Intel C compiler *attempts* to be front-end compatible with gcc (i.e.
implement the same extensions with identical semantics). This
compatibility is not good enough for projects pushing the envelope as
far as possible (e.g. the Linux kernel doesn't compile out of the box
with the Intel compiler; then again it doesn't compile with *any* gcc
version, either) but it works for most software using the extensions
specified in the "Extensions to the C Language Family" chapter of the
gcc documentation.

The same chapter that should have served as a source of inspiration
to the people who elaborated the C99 standard, especially considering
that it was describing *existing practice*, which C99 was supposed
to actually standardise.

Dan
--
Dan Pop
DESY Zeuthen, RZ group
Email: Da*****@ifh.de
Nov 14 '05 #163

P: n/a
In <ln************@nuthaus.mib.org> Keith Thompson <ks***@mib.org> writes:
Chris Hills <ch***@phaedsys.org> writes:
[...]
The standard is freely available from a multitude of sources. The
recognised test suites are not; they cost a lot of money. However, there
is a lot of work in them.


If the standard is freely available, I wasted $18. Are you referring
to the draft versions?


A Google search will reveal the availability of pirate copies of the
official standard. Whether this counts as "freely available" or not is
another issue...

Dan
--
Dan Pop
DESY Zeuthen, RZ group
Email: Da*****@ifh.de
Nov 14 '05 #164

P: n/a
Dan Pop wrote:
CBFalconer <cb********@yahoo.com> writes:
This may be the wrong venue in which to ask, but how many
programmers really use the GNU extensions?


Once you define the language of your project as being GNU C, you
have no good reason not to use them. Two well known such projects
are the Linux kernel and glibc v2.

To some people and projects, performance is more important than
being able to compile with a different compiler and gcc has plenty
of obscure extensions that are aimed at improving the performance
of the generated code in ways not possible in standard C.

The GNU C extensions are important enough in the Linux world that
the Intel C compiler *attempts* to be front-end compatible with
gcc (i.e. implement the same extensions with identical semantics).
This compatibility is not good enough for projects pushing the
envelope as far as possible (e.g. the Linux kernel doesn't compile
out of the box with the Intel compiler; then again it doesn't
compile with *any* gcc version, either) but it works for most
software using the extensions specified in the "Extensions to the
C Language Family" chapter of the gcc documentation.


Looking at that chapter's menu headings, very little appears (to me)
to apply to such 'efficiency' cases. The exceptions are: * Nested
Functions:: ; * Constructing Calls:: ; * Typeof:: ; and * Return
Address::, where Nested Functions would primarily apply to
compiling Pascal and Ada, and the others seem applicable to OS
building.

Have you some specific examples where they make a measurable
difference? Such a place (in the OS) would probably have to do
with rescheduling or handling interrupts (latency). Gdb has to
come in there somewhere.

--
"I'm a war president. I make decisions here in the Oval Office
in foreign policy matters with war on my mind." - Bush.
"If I knew then what I know today, I would still have invaded
Iraq. It was the right decision" - G.W. Bush, 2004-08-02
Nov 14 '05 #165

P: n/a
Richard Kettlewell wrote:
I'd use C99 language features like designated initializers at work if
all the compilers we build with supported them; but only if they *all*
did, since otherwise I'd have to come up with a workaround for
everything else and there'd be no point using the new feature in the
first place. So there is a bootstrapping problem.
Exactly. Even though there was a relative rush to
bring out C89-conforming compilers, for many years
we still had to create source code that could be
compiled on platforms that only had Reiser cpp/PCC
compilers. Thus we heavily relied on macros to
implement "impedance transformers" for function
declarations etc. and had to avoid depending too
much on features available only in C89.
Library features are easier to start using before the implementors
have all caught up as sometimes you can do your own version for
obsolete platforms.


Yes, or find somebody who has done it as open source,
for example http://www.lysator.liu.se/c/q8/

Nov 14 '05 #166

P: n/a
CBFalconer wrote:
"Extensions to the
C Language Family" chapter of the gcc documentation.


Looking at that chapter's menu headings, very little appears (to me)
to apply to such 'efficiency' cases. The exceptions are: * Nested
Functions:: ; * Constructing Calls:: ; * Typeof:: ; and * Return
Address::, (...)


Some others - though I have no measurements for any of them:

Labels as values look like they could be more efficient than if or
switch statements in some cases.

'inline' is certainly useful for optimization. (That is older than
C99, remember. Unless I've missed a part of this thread which says you
are only talking about gcc vs C99.) So are typeof & statement
expressions, which let you write macros that would otherwise be
cumbersome to deal with.

Compound literals and designated initializers make it easier to write
code which initializes things statically instead of at run-time.

Some function attributes let you inform the compiler of optimizations it
can do - like 'pure', 'const', 'malloc', maybe 'noreturn'.

More machine-specific hacking: Explicit register variables, including
global register variables, and asm statements.

The vector extension, whatever that is.

Under 'Other builtins:' At least __builtin_choose_expr,
__builtin_constant_p, __builtin_expect, __builtin_prefetch. Some are
compiler hints, others are for the programmer to provide a fast version
of some piece of code which the compiler can use in some cases.

--
Hallvard
Nov 14 '05 #167

P: n/a
Hallvard B Furuseth wrote:
CBFalconer wrote:
"Extensions to the
C Language Family" chapter of the gcc documentation.


Looking at that chapter's menu headings, very little appears (to me)
to apply to such 'efficiency' cases. The exceptions are: * Nested
Functions:: ; * Constructing Calls:: ; * Typeof:: ; and * Return
Address::, (...)


Some others - though I have no measurements for any of them:

Labels as values look like they could be more efficient than if
or switch statements in some cases.

'inline' is certainly useful for optimization. (That is older
than C99, remember. Unless I've missed a part of this thread
which says you are only talking about gcc vs C99.) So are typeof
& statement expressions, which let you write macros that would
otherwise be cumbersome to deal with.

Compound literals and designated initializers make it easier to
write code which initializes things statically instead of at
run-time.


I wasn't thinking about usefulness, just about things that allow
creation of more efficient code in some environment. If there is
another way to express something that allows the efficient code,
and remains standard, I see no point in the extension.

--
"I'm a war president. I make decisions here in the Oval Office
in foreign policy matters with war on my mind." - Bush.
"If I knew then what I know today, I would still have invaded
Iraq. It was the right decision" - G.W. Bush, 2004-08-02
Nov 14 '05 #168

P: n/a
Hallvard B Furuseth <h.**********@usit.uio.no> wrote:
CBFalconer wrote:
"Extensions to the
C Language Family" chapter of the gcc documentation.
Looking at that chapter's menu headings, very little appears (to me)
to apply to such 'efficiency' cases. The exceptions are: * Nested
Functions:: ; * Constructing Calls:: ; * Typeof:: ; and * Return
Address::, (...)


Some others - though I have no measurements for any of them:

Labels as values look like they could be more efficient than if or
switch statements in some cases.


Labels as values are a maintenance nightmare, and probably not very
optimisable, either - poof goes efficiency. I'm glad C99 doesn't include
this misfeature. C is not Sinclair Basic, after all.
'inline' is certainly useful for optimization. (That is older than
C99, remember. Unless I've missed a part of this thread which says you
are only talking about gcc vs C99.)
And inline _is_ in C99, so what's the problem?
So are typeof & statement
expressions, which let you write macros that would otherwise be
cumbersome to deal with.
This is the only feature mentioned in this thread which C99 doesn't
have, and which I think could be useful.
Some function attributes let you inform the compiler of optimizations it
can do - like 'pure', 'const', 'malloc', maybe 'noreturn'.
And they're generally very unportable.
More machine-specific hacking: Explicit register variables, including
global register variables, and asm statements.


Machine-specific. Exactly. Nice for a compiler, _not_ good for a
Standard.

Richard
Nov 14 '05 #169

P: n/a
In <41***************@yahoo.com> CBFalconer <cb********@yahoo.com> writes:
Dan Pop wrote:
CBFalconer <cb********@yahoo.com> writes:
This may be the wrong venue in which to ask, but how many
programmers really use the GNU extensions?


Once you define the language of your project as being GNU C, you
have no good reason not to use them. Two well known such projects
are the Linux kernel and glibc v2.

To some people and projects, performance is more important than
being able to compile with a different compiler and gcc has plenty
of obscure extensions that are aimed at improving the performance
of the generated code in ways not possible in standard C.

The GNU C extensions are important enough in the Linux world that
the Intel C compiler *attempts* to be front-end compatible with
gcc (i.e. implement the same extensions with identical semantics).
This compatibility is not good enough for projects pushing the
envelope as far as possible (e.g. the Linux kernel doesn't compile
out of the box with the Intel compiler; then again it doesn't
compile with *any* gcc version, either) but it works for most
software using the extensions specified in the "Extensions to the
C Language Family" chapter of the gcc documentation.


Looking at that chapter's menu headings, very little appears (to me)
to apply to such 'efficiency' cases.


I was mostly referring to stuff named with the __builtin prefix, some of
it not even documented in that chapter. Other features like asm() and
certain function attributes also have a significant impact on code
performance.

Dan
--
Dan Pop
DESY Zeuthen, RZ group
Email: Da*****@ifh.de
Nov 14 '05 #170

P: n/a
Allin Cottrell wrote:

Surely this is putting the chicken before the horse.


Surely you mean the cart before the egg? :-)
Nov 14 '05 #171

P: n/a

In article <nj*******************@newssvr29.news.prodigy.com>, "Mabden" <mabden@sbc_global.net> writes:
"Michael Wojcik" <mw*****@newsguy.com> wrote in message
news:ch*********@news4.newsguy.com...
In article <1H******************@newssvr27.news.prodigy.com>, "Mabden" <mabden@sbc_global.net> writes:

Understood, but I feel like I'm in a Monty Python skit at this point.
"An argument is a connected series of statements intended to establish a
proposition. It's not just saying, 'No it isn't.'"


I have seen several responses to your previous posts, including my
own, which contained significant information and were not simply
contradiction. In fact, I don't see anywhere in my post where I
contradicted anything you wrote.


Which makes this reply so confusing. Why are you going back and making a new
thread on this old topic?


I'm afraid you're confused, not my reply. I didn't start a new thread
(just double-checked the threading in Google Groups, and it's correct).
My reply was posted all of two days after your message, so it's hardly
old. This reply is a bit later, due to the long weekend, but still well
within the normal range for Usenet.
I responded to your last thread, but now we're getting circular here.
I fail to see this circularity. I commented on your message; you
replied and claimed you were simply being contradicted; I pointed out
that you were wrong about that as well. Neither of us returned to
arguments previously offered.
The "troll alert" messages to which you're objecting alert readers
to an established history of questionable posting. That's not the
simple attack you make it out to be.


But what I'm objecting to is the AUTOMATIC "troll alert". Every statement
was being followed by a no-content missive (sounds like missile, get it)
"warning".


This isn't true, as Falconer already noted: he does *not* post his
"troll alert" responses to every post by Tisdale. Sometimes
Tisdale's posts are correct, and he leaves those unmolested. The
same appears to be true for anyone else posting "troll alert"
messages. There's nothing "automatic" about it.
And slightly insulting to readers:
who are "you" (not you personally, but the troll alert posters) to tell me
who's a troll? Will you tell me which books and magazines to read next?!


Now you're getting circular; this is the same argument I addressed
in my first reply to you in this thread.
Just killfile him and YOU won't have to hear what he has to say, and the
rest of us can go on listening to whomever we choose.


I don't know why you think I don't want to read ERT's posts, or why
you believe my refraining from doing so has any effect on whom you
read.


Huh? I just don't want the warnings.


Try reading this again.

You wrote: "Just killfile him".

I responded: "I don't know why you think I don't want to read ERT's
posts". In other words, I don't wish to killfile him. Your assumption
that I do is invalid.

You also wrote: "the rest of us can go on listening to whomever we
choose".

I pointed out that someone else's actions have no effect on whom you
choose to read. Simple as that.

Frankly, it sounds to me like you're trying to defend a position when
your initial arguments have proven unconvincing to your audience, and
you don't have any new ones to introduce. That doesn't mean you've
"lost" (since the argument has no consequences, that's a meaningless
evaluation) or that you must change your mind, but complaining when
people reply to answer your questions or respond to your statements
is not a productive rhetorical maneuver.

Shall we summarize the situation as: you don't want to read these
"troll alerts", but you've failed to show a convincing argument why
they shouldn't be written? If that agrees with your understanding,
then we can drop this subthread.

--
Michael Wojcik mi************@microfocus.com

Push up the bottom with your finger, it will puffy and makes stand up.
-- instructions for "swan" from an origami kit
Nov 14 '05 #172

P: n/a

In article <kr********************@comcast.com>, "Douglas A. Gwyn" <DA****@null.net> writes:
Richard Tobin wrote:
I mean malloc() may return a non-null value and then fail when you
try to actually use the memory. Presumably you already know about this.


I know that such an implementation is badly broken.


Oh good, the lazy-allocation religious war has broken out again.

If no implementation is allowed to fail under the conditions "malloc
returns non-null and later the program tries to use the returned
memory", then every real implementation is broken. Memory access
can fail for reasons beyond the implementation's control.

Lazy allocation has its proponents and opponents, and both have
reasonable arguments at their disposal, but this "lazy allocation
breaks conformance" one is not among them. Even if it's true (and no
one has presented a convincing proof of that, IMO), it's pointless,
since many of us have to use implementations on platforms which use
lazy allocation. If those implementations conform except in using
lazy allocation, that will have to be good enough, since there is no
alternative.

Hell, I'd like it if all the platforms I had to support were completely
free of bugs. Barring that, I can't guarantee "reliable execution of
carefully written programs" anyway. They aren't. Shall I hold off on
further development until I can "get a better platform"?

--
Michael Wojcik mi************@microfocus.com

She felt increasingly (vision or nightmare?) that, though people are
important, the relations between them are not, and that in particular
too much fuss has been made over marriage; centuries of carnal
embracement, yet man is no nearer to understanding man. -- E M Forster
Nov 14 '05 #173

P: n/a
ri*****@cogsci.ed.ac.uk (Richard Tobin) wrote:
In article <f_********************@fe1.news.blueyonder.co.uk>,
David Hopwood <da******************@blueyonder.co.uk> wrote:
The behaviour in that
case is not entirely undefined; most platforms that overcommit virtual
memory will start killing processes, but will not cause them to have
otherwise undefined behaviour.


True, but (on the platforms I use) the undefined behaviour resulting
from stack overflow turns out to be to kill the process (or maybe some
other process: I suspect that stack space is overcomitted in the same
way as heap space), so the practical result is that malloc is no safer
than stack-allocation.


Even so, malloc() is safer, but only because you can sometimes turn
memory overcommitting off; it would be hard to do that with stack
overflows.

Richard
Nov 14 '05 #174

P: n/a
In article <41***************@yahoo.com>,
CBFalconer <cb********@worldnet.att.net> wrote:

I believe the Dinkum effort is a library, not
a compiler. Comeau uses it.


My understanding is that the Dinkum library claims conformance to the
C99 library description, the Comeau compiler claims conformance to the
C99 language description, and they play nicely together. This means
that neither is (or claims to be) a complete C99 implementation, but
the combination is.
dave

--
Dave Vandervies dj******@csclub.uwaterloo.ca
Practical Solution #1: Kill the programmer in question. This is
also the most satisfying solution, because it ensures that the
problem will not recur. --Ben Pfaff in comp.lang.c
Nov 14 '05 #175

P: n/a
dj******@csclub.uwaterloo.ca (Dave Vandervies) writes:
Practical Solution #1: Kill the programmer in question. This is
also the most satisfying solution, because it ensures that the
problem will not recur. --Ben Pfaff in comp.lang.c


I really said that? Wow, I must have been having a bad day.
--
Ben Pfaff
email: bl*@cs.stanford.edu
web: http://benpfaff.org
Nov 14 '05 #176

P: n/a
Richard Bos wrote:
ri*****@cogsci.ed.ac.uk (Richard Tobin) wrote:

... snip ...

True, but (on the platforms I use) the undefined behaviour
resulting from stack overflow turns out to be to kill the
process (or maybe some other process: I suspect that stack space
is overcommitted in the same way as heap space), so the practical
result is that malloc is no safer than stack-allocation.


Even so, malloc() is safer, but only because you can sometimes
turn memory overcommitting off; it would be hard to do that with
stack overflows.


Smarter systems, using segments rather than the primitive linear
addressing mode :-), simply enlarge the stack segment on overflow
and carry on. This can eventually fail for lack of memory,
virtual or real.

--
"I'm a war president. I make decisions here in the Oval Office
in foreign policy matters with war on my mind." - Bush.
"If I knew then what I know today, I would still have invaded
Iraq. It was the right decision" - G.W. Bush, 2004-08-02
Nov 14 '05 #177

P: n/a
In article <ln************@nuthaus.mib.org>, Keith Thompson <kst-
u@mib.org> writes
Chris Hills <ch***@phaedsys.org> writes:
[...]
The standard is freely available from a multitude of sources. The
recognised test suites are not; they cost a lot of money. However there
is a lot of work in them.


If the standard is freely available, I wasted $18. Are you referring
to the draft versions?

It is freely available from many places. I did not say it was free.
PC's are freely available and they are not free.

What is this obsession with having the ISO-C standard free?
Do you expect all your tools to be free?

PLEASE can we not go round this loop again!
The C standard costs a small amount of money.
An insignificant amount for any professional.
The world is not a charity.

/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
\/\/\/\/\ Chris Hills Staffs England /\/\/\/\/\
/\/\/ ch***@phaedsys.org www.phaedsys.org \/\/
\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/
Nov 14 '05 #178

P: n/a
In article <ch**********@sunnews.cern.ch>, Dan Pop <Da*****@cern.ch>
writes
In <ln************@nuthaus.mib.org> Keith Thompson <ks***@mib.org> writes:
Chris Hills <ch***@phaedsys.org> writes:
[...]
The standard is freely available from a multitude of sources. The
recognised test suites are not; they cost a lot of money. However there
is a lot of work in them.


If the standard is freely available, I wasted $18. Are you referring
to the draft versions?


A Google search will reveal the availability of pirate copies of the
official standard. Whether this counts as "freely available" or not is
another issue...

Dan


No, the word is illegal.

/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
\/\/\/\/\ Chris Hills Staffs England /\/\/\/\/\
/\/\/ ch***@phaedsys.org www.phaedsys.org \/\/
\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/
Nov 14 '05 #179

P: n/a
In article <K_********************@comcast.com>, Douglas A. Gwyn
<DA****@null.net> writes
Chris Hills wrote:
I am confused.. the USERS of the compiler were saying ...


Sorry, I misread it in context as saying that some
compiler vendors were encouraging that attitude.


Some may imply that..... See the new proposed secure library
In other words there are many users out there who think the standards
committee should be writing the standard to fit the compilers!


Yes, that kind of problem has existed "forever".
It is a matter of education, not of standards.
So how do you sell the importance of C99 to people like that? And there
are a LOT of them.


Until they are educated about the value of
standards *and why as programmers they should
conform to them as much as possible*, they
aren't going to appreciate C99's standardizing
additional features that they have a good use
for.


So how do we convince them? Most still talk of K&R (usually V1) or
"ANSI-C" as a vague sort of thing; they have no idea what ISO-C is.

/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
\/\/\/\/\ Chris Hills Staffs England /\/\/\/\/\
/\/\/ ch***@phaedsys.org www.phaedsys.org \/\/
\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/
Nov 14 '05 #180

P: n/a
Chris Hills <ch***@phaedsys.org> writes:
In article <ln************@nuthaus.mib.org>, Keith Thompson <kst-
u@mib.org> writes
Chris Hills <ch***@phaedsys.org> writes:
[...]
The standard is freely available from a multitude of sources. The
recognised test suites are not; they cost a lot of money. However there
is a lot of work in them.
If the standard is freely available, I wasted $18. Are you referring
to the draft versions?


It is freely available from many places. I did not say it was free.
PC's are freely available and they are not free.


It's easy to interpret the phrase "freely available" as implying "free
of cost", especially since there's been a great deal of controversy
over whether the standard *should* be free of cost, there are draft
versions that are both freely available and free of cost, and your
next sentence referred to test suites costing a lot of money. (Are
the test suites "freely available" to anyone who has enough money, or
are there other restrictions?)

Thank you for the clarification, but I'm sure I'm not the only one who
interpreted your words the same way.
PLEASE can we not go round this loop again!


Ok.

--
Keith Thompson (The_Other_Keith) ks***@mib.org <http://www.ghoti.net/~kst>
San Diego Supercomputer Center <*> <http://users.sdsc.edu/~kst>
We must do something. This is something. Therefore, we must do this.
Nov 14 '05 #181

P: n/a
CBFalconer wrote:
Hallvard B Furuseth wrote:
CBFalconer wrote:
"Extensions to the
C Language Family" chapter of the gcc documentation.

(...)
'inline' is certainly useful for optimization. (...) So are typeof
& statement expressions, (...)

Compound literals and designated initializers make it easier to
write code which initializes things statically instead of at
run-time.


I wasn't thinking about usefulness, just about things that allow
creation of more efficient code in some environment. If there is
another way to express something that allows the efficient code,
and remains standard, I see no point in the extension.


I'm not sure if you are only addressing the last paragraph or all three,
but: When an improvement (optimization or whatever) gets too cumbersome,
it will often not be done. Like if one had to write a program to
generate the program, or write 100 macros instead of 10.

--
Hallvard
Nov 14 '05 #182

P: n/a
Richard Bos wrote:
Hallvard B Furuseth <h.**********@usit.uio.no> wrote:
CBFalconer wrote:
"Extensions to the
C Language Family" chapter of the gcc documentation.

Looking at that chapter's menu heading, very little appears (to me)
to apply to such 'efficiency' cases. The exceptions are: * Nested
Functions:: ; * Constructing Calls:: ; * Typeof:: ; and * Return
Address::, (...)
Some others - though I have no measurements for any of them:

Labels as values looks like they could be more efficient than if or
switch statements in some cases.


Labels as values are a maintenance nightmare, and probably not very
optimisable, either - poof goes efficiency.


They should certainly be used sparingly, but if you have to do a
computed goto anyway, then I don't see how e.g. a switch full of
'case foo: goto bar;' is more optimisable.
I'm glad C99 doesn't include
this misfeature. C is not Sinclair Basic, after all.

'inline' is certainly useful for optimization. (That's is older than
C99, remember. Unless I've missed a part of this thread which says you
are only talking about gcc vs C99.)


And inline _is_ in C99, so what's the problem?


It's not a problem, quite the contrary. I mentioned it because
the article I was replying to said there were very few 'efficiency'
extensions to gcc. This is one.
So are typeof & statement
expressions, which let you write macros that would otherwise be
cumbersome to deal with.


This is the only feature mentioned in this thread which C99 doesn't
have, and which I think could be useful.


I don't see what's wrong with 'pure' etc below, nor with the features
you snipped below my machine-specific examples.
Some function attributes let you inform the compiler of optimizations it
can do - like 'pure', 'const', 'malloc', maybe 'noreturn'.


And they're generally very unportable.


Unportable? If you mean they are not standardized, then yes of course
they are unportable. If you mean they would be unportable even if
standardized, I don't know what you mean. It's simply information to
the compiler about the function, which the compiler may make use of.
More machine-specific hacking: Explicit register variables, including
global register variables, and asm statements.


Machine-specific. Exactly. Nice for a compiler, _not_ good for a
Standard.


Yes. I had the impression that this subthread was about extensions in
GCC in general, not only the ones which might be put in the standard.

--
Hallvard
Nov 14 '05 #183

P: n/a
On Tue, 7 Sep 2004 20:33:25 +0100 in comp.std.c, Chris Hills
<ch***@phaedsys.org> wrote:
In article <K_********************@comcast.com>, Douglas A. Gwyn
<DA****@null.net> writes
Chris Hills wrote:
I am confused.. the USERS of the compiler were saying ...


Sorry, I misread it in context as saying that some
compiler vendors were encouraging that attitude.


Some may imply that..... See the new proposed secure library


Links? References?
In other words there are many users out there who think the standards
committee should be writing the standard to fit the compilers!

Additions of niche features to the standard that will not be backed
by purchasing compilers/upgrades are pointless. None of GNU, MS, or
embedded vendors seem to be in any hurry to offer C99 compliance.
Yes, that kind of problem has existed "forever".
It is a matter of education, not of standards.
So how do you sell the importance of C99 to people like that and there
are a LOT of them.


Until they are educated about the value of
standards *and why as programmers they should
conform to them as much as possible*, they
aren't going to appreciate C99's standardizing
additional features that they have a good use
for.


So how do we convince them? Most still talk of K&R (usually V1) or
"ANSI-C" as a vague sort of thing; they have no idea what ISO-C is.


Few are aware that there was a C99 standard, as they have not seen
implementations offer it: the committees and NBs have to come up with
compelling reasons to upgrade and get them mentioned on the web.

--
Thanks. Take care, Brian Inglis Calgary, Alberta, Canada

Br**********@CSi.com (Brian[dot]Inglis{at}SystematicSW[dot]ab[dot]ca)
fake address use address above to reply
Nov 14 '05 #184

P: n/a
CBFalconer <cb********@yahoo.com> wrote:
Richard Bos wrote:
ri*****@cogsci.ed.ac.uk (Richard Tobin) wrote:

... snip ...

True, but (on the platforms I use) the undefined behaviour
resulting from stack overflow turns out to be to kill the
process (or maybe some other process: I suspect that stack space
is overcomitted in the same way as heap space), so the practical
result is that malloc is no safer than stack-allocation.


Even so, malloc() is safer, but only because you can sometimes
turn memory overcommitting off; it would be hard do that with
stack overflows.


Smarter systems, using segments rather than the primitive linear
addressing mode :-), simply enlarge the stack segment on overflow
and carry on. This can eventually fail for lack of memory,
virtual or real.


Of course. And at that point, on a correctly[1] configured system,
malloc() will tell you that this happened, while an overflowing stack
cannot do so, no matter how you've set up the system.

Richard

[1] For the circumstances; bugger advocacy
Nov 14 '05 #185

P: n/a
Richard Bos wrote:
Of course. And at that point, on a correctly[1] configured system,
malloc() will tell you that this happened, while an overflowing stack
cannot do so, no matter how you've set up the system.


Not only that, it isn't very hard (in such a hostile
environment) for malloc to access all the supposedly-
allocated region reported by the OS function in order
to determine whether it is indeed accessible. The only
problem is in catching whatever form of exception might
result (e.g. catch SIGSEGV) and recovering gracefully,
after which malloc can return a null pointer.
Nov 14 '05 #186

P: n/a
Michael Wojcik wrote:
Memory access
can fail for reasons beyond the implementation's control.
Normally that would only occur due to hardware or system
failure, not by intentional design.
Hell, I'd like it if all the platforms I had to support were completely
free of bugs. Barring that, I can't guarantee "reliable execution of
carefully written programs" anyway. They aren't. Shall I hold off on
further development until I can "get a better platform"?


No, but you should certainly complain to providers of
broken systems about their bugs and other infelicities.
While you're at it, also complain to the malloc
provider for not properly dealing with known system
characteristics.
Nov 14 '05 #187

P: n/a
Brian Inglis wrote:
<ch***@phaedsys.org> wrote:
<DA****@null.net> writes
Sorry, I misread it in context as saying that some
compiler vendors were encouraging that attitude.

Some may imply that..... See the new proposed secure library

Links? References?


Apparently Chris is referring to a proposal for WG14 to
work on more "secure" (mainly anti-buffer-overflow)
alternatives to existing C standard library functions.
Microsoft delivered a presentation using examples that
they have already implemented as part of their extended
C library. See
http://www.open-std.org/jtc1/sc22/wg.../docs/n997.pdf
for an associated official document.
Some people have claimed that this is a proposal to lock
the C standard into a proprietary vendor product, but
that is patently false, as can be seen by reading the
proposal. Randy Meyers of WG14 is currently spearheading
the effort to prepare an actual TR in this area.
Nov 14 '05 #188

P: n/a
In <41***************@null.net> "Douglas A. Gwyn" <DA****@null.net> writes:
Richard Bos wrote:
Of course. And at that point, on a correctly[1] configured system,
malloc() will tell you that this happened, while an overflowing stack
cannot do so, no matter how you've set up the system.


Not only that, it isn't very hard (in such a hostile
environment) for malloc to access all the supposedly-
allocated region reported by the OS function in order
to determine whether it is indeed accessible. The only
problem is in catching whatever form of exception might
result (e.g. catch SIGSEGV) and recovering gracefully,
after which malloc can return a null pointer.


That would defeat one of the greatest advantages of lazy swap allocation:
sparse arrays implemented as ordinary arrays.

Lazy swap allocation was designed with a purpose in mind and there is no
point for the C implementation to defeat it. People who don't want it
can either disable it (on certain platforms, it can be done by the user,
on a per process basis, on others it requires a reboot) or choose a
platform supporting eager swap allocation only.

Lazy swap allocation doesn't affect the implementation's conformance any
more than the possibility to kill any process before normal termination
and I have yet to see people complaining about this feature of most
operating systems.

Dan
--
Dan Pop
DESY Zeuthen, RZ group
Email: Da*****@ifh.de
Currently looking for a job in the European Union
Nov 14 '05 #189

P: n/a
In article <k7********************************@4ax.com>, Brian Inglis
<Br**********@SystematicSW.Invalid> writes
On Tue, 7 Sep 2004 20:33:25 +0100 in comp.std.c, Chris Hills
<ch***@phaedsys.org> wrote:
In article <K_********************@comcast.com>, Douglas A. Gwyn
<DA****@null.net> writes
Chris Hills wrote:
I am confused.. the USERS of the compiler were saying ...

Sorry, I misread it in context as saying that some
compiler vendors were encouraging that attitude.


Some may imply that..... See the new proposed secure library


Links? References?


MS are proposing a new secure library for C... all 2000 of its functions.
The document should be on the standards web site.

It is currently being discussed by the NBs and WG14
In other words there are many users out there who think the standards
committee should be writing the standard to fit the compilers!
Additions of niche features to the standard that will not be backed
by purchasing compilers/upgrades is pointless.


I agree.
Until they are educated about the value of
standards *and why as programmers they should
conform to them as much as possible*, they
aren't going to appreciate C99's standardizing
additional features that they have a good use
for.


So how do we convince them? Most still talk of K&R (usually V1) or
"ANSI-C" as a vague sort thing they have no idea what ISO-C is.


Few are aware that there was a C99 standard, as they have not seen
implementations offer it: the committees and NBs have to come up with
compelling reasons to upgrade and get them mentioned on the web.

On the other hand, why haven't the compiler implementors included the new
parts of C99 as they have gone along? People are saying that it is not
difficult to do. So you would have thought that, as most of the compilers
have had new versions in the last 5 years, there would be a lot more C99
compilers.

At the moment there is only one produced by a mainstream commercial
company.
/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
\/\/\/\/\ Chris Hills Staffs England /\/\/\/\/\
/\/\/ ch***@phaedsys.org www.phaedsys.org \/\/
\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/
Nov 14 '05 #190

P: n/a
Chris Hills wrote:
MS are proposing a new secure library for C... all 2000 of its functions.
No. They said that their *full* extended C library contains
that many functions. They're certainly not proposing that
the C standard adopt all of them. We are certainly looking
at their interfaces and specs for relevant "secure" versions
of standard functions, since we are interested in pursuing
this area anyway. (In fact we still hear complaints about
gets().)
On the other hand why haven't the compiler implementors included the new
parts of C99 as they have gone along?


The compilers I use have certainly been incorporating more new
C99 features with each release. I think the one on my main
hosted platform now claims full conformance.
Nov 14 '05 #191

P: n/a
According to Dan Pop <Da*****@cern.ch>:
Lazy swap allocation doesn't affect the implementation's conformance any
more than the possibility to kill any process before normal termination
and I have yet to see people complaining about this feature of most
operating systems.


My confused neural system currently spews out the following bit of
information, which I transcribe here with absolutely no warranty of
exactness or suitability for the purpose of the present discussion:

Some years ago, ftp.cdrom.com was a big downloading site. At one point,
they used Alpha hardware with OSF/1. They could handle about 700
simultaneous connected clients. It turned out that by activating lazy
swap allocation, that number raised to 3000. The ftp server software,
combined with the libc malloc() implementation, was allocating much more
memory than what was actually needed, thus limiting arbitrarily the
number of concurrent instances, unless some sort of overcommit was used.
With overcommit, the server was nonetheless quite stable, so the overall
result was a net gain.
--Thomas Pornin
Nov 14 '05 #192

P: n/a
Chris Hills <ch***@phaedsys.org> wrote in message news:<cN**************@phaedsys.demon.co.uk>...
most of the C programming is in the embedded world


Do you have any evidence for that claim?

Of course there are a lot of embedded _processors_ around,
but it would be a vast and shaky leap to go from "most processors"
to "most programming".
Nov 14 '05 #193

P: n/a
In <ch***********@biggoron.nerim.net> po****@nerim.net (Thomas Pornin) writes:
According to Dan Pop <Da*****@cern.ch>:
Lazy swap allocation doesn't affect the implementation's conformance any
more than the possibility to kill any process before normal termination
and I have yet to see people complaining about this feature of most
operating systems.


My confused neural system currently spews out the following bit of
information, which I transcribe here with absolutely no warranty of
exactness or suitability for the purpose of the present discussion:

Some years ago, ftp.cdrom.com was a big downloading site. At one point,
they used Alpha hardware with OSF/1. They could handle about 700
simultaneous connected clients. It turned out that by activating lazy
swap allocation, that number raised to 3000. The ftp server software,
combined with the libc malloc() implementation, was allocating much more
memory than what was actually needed, thus limiting arbitrarily the
number of concurrent instances, unless some sort of overcommit was used.
With overcommit, the server was nonetheless quite stable, so the overall
result was a net gain.


Exactly 10 years ago, I got a low-end Alpha DEC/OSF1 system as
my desktop machine. The thing had 64 MB of RAM and 128 MB of swap,
which was quite reasonable for a Unix workstation at the time (Linux
was perfectly happy with a lot less). Without lazy swap allocation,
by the time my X session had started, most of the swap space was
already allocated. After starting a few X clients, the system was
out of virtual memory resources.

Since increasing the size of the swap partition was not very practical,
I tried switching to lazy swap allocation. The effects were impressive:
at the end of the X session startup procedure, not a single page of swap
space was allocated. I could start as many X clients as I needed without
using more than half of my swap space. The only time when I ran into
troubles was when a netscape process got mad...

Sad conclusion: in the presence of so much software that overallocates
memory, lazy swap allocation is a must for people who cannot afford
wasting virtual memory resources. With the cheap disks of today, this
is far less an issue than it was back then, however.

Dan
--
Dan Pop
DESY Zeuthen, RZ group
Email: Da*****@ifh.de
Currently looking for a job in the European Union
Nov 14 '05 #194

P: n/a
In article <19**************************@posting.google.com >, Fergus
Henderson <fj************@galois.com> writes
Chris Hills <ch***@phaedsys.org> wrote in message news:<cNSlQKDyV4OBFAdJ@phaedsys.demon.co.uk>...
most of the C programming is in the embedded world


Do you have any evidence for that claim?

Of course there are a lot of embedded _processors_ around,
but it would be a vast and shaky leap to go from "most processors"
to "most programming".


What C programming isn't embedded? By which I mean pure C as opposed to
the pseudo-C people do with C++ compilers.

When you think that anything with electricity applied to it has a micro
these days: autos have 50-odd each, washing machines, microwaves,
radios, mobile phones, smart cards, missiles, etc. That is a lot of
devices that get programmed in C.

/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
\/\/\/\/\ Chris Hills Staffs England /\/\/\/\/\
/\/\/ ch***@phaedsys.org www.phaedsys.org \/\/
\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/
Nov 14 '05 #195

P: n/a
In article <41***************@null.net>, Douglas A. Gwyn
<DA****@null.net> writes
Chris Hills wrote:
MS are proposing a new secure library for C... all 2000 of functions.


No. They said that their *full* extended C library contains
that many functions. They're certainly not proposing that
the C standard adopt all of them. We are certainly looking
at their interfaces and specs for relevant "secure" versions
of standard functions, since we are interested in pursuing
this area anyway. (In fact we still hear complaints about
gets().)
On the other hand why haven't the compiler implementors included the new
parts of C99 as they have gone along?


The compilers I use have certainly been incorporating more new
C99 features with each release. I think the one on my main
hosted platform now claims full conformance.


What is it?
/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
\/\/\/\/\ Chris Hills Staffs England /\/\/\/\/\
/\/\/ ch***@phaedsys.org www.phaedsys.org \/\/
\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/
Nov 14 '05 #196

P: n/a
Chris Hills wrote:
What C programming isn't embedded?
By which I mean pure C
as opposed to the pseudo C people do with C++ compilers.

When you think that anything with electricity applied to it
has a micro these days.
Autos have 50 odd each, washing machines, microwaves,
radios, mobile phones, smart cards missiles etc. etc.
That is a lot of devices that get programmed in C.


Yes, but they represent only a tiny fraction of all C programs.
Embedded [C] programmers represent
a tiny fraction of all C programmers.
Nov 14 '05 #197

P: n/a
On 2004-09-09, Dan Pop <Da*****@cern.ch> wrote:
Sad conclusion: in the presence of so much software that overallocates
memory, lazy swap allocation is a must for people who cannot afford
wasting virtual memory resources. With the cheap disks of today, this
is far less an issue today than it was back then, however.


It's not just explicit overallocation as such. There's also a lot of
implicit swap allocation.

If you want to really turn overcommit off, you have to reserve swap
not just for malloc but any time you do a virtual memory operation
that could lead to copying a page later: forking, for instance, or
mapping a file into memory. Since most Unix systems nowadays have
shared libraries that are loaded with mmap, and the libraries continue
to bloat out, you can end up with a *lot* of pointlessly reserved swap
space even by current standards.

The general (but, of course, never universal) consensus among OS
implementors is that fully conservative swap allocation is
unreasonably expensive, and that changing the behavior of just malloc
(so processes can still randomly die on other blocks of memory if you
run out of swap) isn't a particularly good idea. It's also not clear
that if the system *does* run out of swap that having malloc start
failing is the right response, either. This issue was discussed to
death on the Linux kernel mailing list a few years back under the
general heading of "OOM" (out of memory) if anyone's interested in one
particular take on the gory details.

What this means from a C perspective is that swap overcommit isn't
likely to go away anytime soon. This does not mean, however, that
malloc never returns NULL...

--
- David A. Holland
(the above address works if unscrambled but isn't checked often)
Nov 14 '05 #198

P: n/a
E. Robert Tisdale wrote:
Chris Hills wrote:
What C programming isn't embedded?
By which I mean pure C
as opposed to the pseudo C people do with C++ compilers.

When you think that anything with electricity applied to it
has a micro these days.
Autos have 50 odd each, washing machines, microwaves,
radios, mobile phones, smart cards missiles etc. etc.
That is a lot of devices that get programmed in C.

Yes, but they represent only a tiny fraction of all C programs.
Embedded [C] programmers represent
a tiny fraction of all C programmers.


I was embedded last night. And again tonight, I hope. :-)
--
Joe Wright mailto:jo********@comcast.net
"Everything should be made as simple as possible, but not simpler."
--- Albert Einstein ---
Nov 14 '05 #199

P: n/a
On Thu, 09 Sep 2004 13:54:31 -0700 in comp.std.c, "E. Robert Tisdale"
<E.**************@jpl.nasa.gov> wrote:
Chris Hills wrote:
What C programming isn't embedded?
By which I mean pure C
as opposed to the pseudo C people do with C++ compilers.

When you think that anything with electricity applied to it
has a micro these days.
Autos have 50 odd each, washing machines, microwaves,
radios, mobile phones, smart cards missiles etc. etc.
That is a lot of devices that get programmed in C.


Yes, but they represent only a tiny fraction of all C programs.
Embedded [C] programmers represent
a tiny fraction of all C programmers.


Where do you get your statistics on the number of embedded and
non-embedded C programmers?
I don't know either way, but am willing to accept that the number of C
programmers working on embedded devices may now be larger than the
number working on OSes, tools, DBMSes, *[iu]x projects in C nowadays,
as the commercial world seems to have switched from C to newer
languages and tools.

--
Thanks. Take care, Brian Inglis Calgary, Alberta, Canada

Br**********@CSi.com (Brian[dot]Inglis{at}SystematicSW[dot]ab[dot]ca)
fake address use address above to reply
Nov 14 '05 #200
