
Is C99 the final C?

I was just thinking about this, specifically wondering if there are any
features that the C specification currently lacks, and which may be
included in some future standardization.

Of course, I speak only of features in the spirit of C; something like
object-orientation, though a nice feature, does not belong in C.
Something like being able to #define a #define would be very handy,
though, e.g.:

#define DECLARE_FOO(bar) #define FOO_bar_SOMETHING \
                         #define FOO_bar_SOMETHING_ELSE

I'm not sure whether the features of cpp are even included in the C
standard though (and GCC has definitely taken quite a nonstandard approach
with regards to certain token expansions and whatnot), but that's one area
of improvement I see.
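
The closest thing I'm aware of in standard C is the "X-macro" idiom,
where a single list macro is expanded in several different ways -- a
sketch (FOO_FIELDS and the expander macros are made-up names):

#define FOO_FIELDS(X) \
    X(SOMETHING)      \
    X(SOMETHING_ELSE)

#define AS_ENUM(name) FOO_##name,
enum foo { FOO_FIELDS(AS_ENUM) FOO_COUNT };

#define AS_STRING(name) #name,
static const char *foo_names[] = { FOO_FIELDS(AS_STRING) };

It works, but it's a workaround rather than the real thing.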

I would also like to see something along the lines of C++ templating,
except without the really kludgy implementation that the C++ folks
decided on (and without the OOP).

... Mike pauses for the sound of a thousand *plonks*

Templates save a lot of time when it comes to commonly-used data
structures, and as they are entirely implemented at compile-time and
don't, by definition, involve OOP (although they can be well suited to
it), I think they would be a nice addition and in the spirit of C.
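
The poor man's version available today is a declaration macro, something
like this sketch (DECLARE_MAX is a made-up name):

#define DECLARE_MAX(T)         \
    static T max_##T(T a, T b) \
    {                          \
        return a > b ? a : b;  \
    }

DECLARE_MAX(int)     /* defines max_int()    */
DECLARE_MAX(double)  /* defines max_double() */

but it gives you none of the type checking or error messages that real
templates provide.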

Your thoughts? I'm sure there's some vitriol coming my way but I'm
prepared 8)

--
Mike's Patented Blocklist; compile with gcc:

i=0;o(a){printf("%u",i>>8*a&255);if(a){printf(".");o(--a);}}
main(){do{o(3);puts("");}while(++i);}

Nov 13 '05
Paul Hsieh wrote:
Sidney Cadot <si****@jigsaw.nl> wrote:
Sure. Vendors are waiting to see what the C++ people do, because they
are well aware of the irreconcilable conflicts that have arisen. Bjarne
and crew are going to be forced to take the new stuff in C99 in the bits
and pieces that don't cause any conflict or aren't otherwise stupid for
other reasons. The vendors are going to look at this and decide that the
subset of C99 that the C++ people chose will be the least problematic
solution and just go with that.


Ok. I'll give you 10:1 odds; there will be a (near-perfect) C99 compiler
by the end of this decade.

A single vendor?!?! Ooooh ... try not to set your standards too high.

One has to be conservative when engaging in bets.
Obviously, it's well known that the GNU C++ people are basically
converging towards C99 compliance and are most of the way there already.
That's not my point. My point is: will Sun, Microsoft, Intel, MetroWerks,
etc. join the fray so that C99 is ubiquitous to the point of obsoleting
all previous C's for all practical purposes for the majority of
developers?
I think they will. Could take a couple of years though.
Maybe the Comeau guy
will join the fray to serve the needs of the "perfect spec compliance" market
that he seems to be interested in.

If not, then projects that have a claim of real portability will never
embrace C99 (like LUA, or Python, or the JPEG reference implementation,
for example.) Even average developers will forgo the C99 features for
fear that someone will try to compile their stuff on an old compiler.
Sure, there'll be market inertia, but this also happened with the
transition of K&R -> ANSI fifteen years ago.
Look, nobody uses K&R-style function declarations anymore. The reason is
because the ANSI standard obsoleted them, and everyone picked up the ANSI
standard. That only happened because *EVERYONE* moved forward and picked up
the ANSI standard. One vendor is irrelevant.
Ok. Can't speak for MW, but I think that by the end of 2007 we'll have
near-perfect C99 compilers from GNU, Sun, Microsoft, and Intel. Odds now
down to 5:1; you're in?
No, that's not what I am proposing. I am saying that you should not use
structs at all, but you can use the contents of them as a list of comma
separated entries. With a more beefed-up preprocessor one could find the
offset of a packed char array that corresponds to the nth element of the
list as a sum of sizeof()'s and you'd be off to the races.


Perhaps I'm missing something here, but wouldn't it be easier to use the
offsetof() macro?

It would be, but only if you have the packed structure mechanism. Other
people have posted indicating that in fact _Packed is more common than I
thought, so perhaps my suggestion is not necessary.
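
For readers following along, offsetof() is the standard piece; only the
packing is a vendor extension. A sketch:

#include <stddef.h>

struct rec {
    char tag;
    long value;   /* with a vendor's _Packed or #pragma pack, offset 1 */
};

/* offsetof() is standard C; a packing extension only changes the
   layout it reports */
size_t value_offset = offsetof(struct rec, value);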
Ok. I agree with you on extending the capabilities of the preprocessor
in general, although I can't come up with actual things I miss in
everyday programming.
C is a language suitable for, and highly encouraging of, writing
extremely unsound and poor code. Fixing it would require a major
overhaul of the language and library.


That's true. I don't quite see how this relates to the preceding
statement though.

I'm saying that trying to fix C's intrinsic problems shouldn't start or
end with some kind of resolution of call stack issues. Anyone who
understands machine architecture will not be surprised about call stack
depth limitations.
It's the task of a standard to spell these out, I think.
There are far more pressing problems in the language that one would like to
fix.
Yes, but most things that relate to the "encouraging of writing
extremely unsound and poor code", as you describe C, would be better
fixed by using another language. A lot of the inherently unsafe things
in C are sometimes needed, when doing low-level stuff.
[powerful heap manager]
But a third party library can't do this portably.


I don't see why not?


Explain to me how you implement malloc() in a *multithreaded*
environment portably. You could claim that C doesn't support
multithreading, but I highly doubt you're going to convince any vendor
that they should shut off their multithreading support based on this
argument.


Now you're shifting the goalposts.
By dictating its existence in the library, it would put the
responsibility of making it work right in the hands of the vendor
without affecting the C standard's stance on not acknowledging the need
for multithreading.
Obviously, you cannot write a multithreading heap manager portably if
there is no portable standard on multithreading (doh). If you need this
you can always presume POSIX, and be on your way.
It's actually useful functionality that you just can't get from the C
language, and there's no way to reliably map such functionality to the C
language itself. One is forced to know the details of the underlying
platform to implement such things. It's something that really *should*
be in the language.
I disagree. POSIX is for things like this.
Well, it looks to me like you're proposing to have a feature-rich heap
manager. I honestly don't see why this couldn't be implemented portably,
without platform-specific knowledge. Could you elaborate?


See my multithreading comment above. Also, efficient heaps are usually
written with a flat view of memory in mind. This is more or less
impossible on non-flat memory architectures (like segmented
architectures.)


What does the latter remark have to do with C's suitability for doing it?
[...] I want this more for reasons of orthogonality in design than
anything else.

You want orthogonality in the C language? You must be joking ...

Not at all.
My proposal allows the programmer to decide what is or is not useful to them.

I'm all for that.


Well, I'm a programmer, and I don't care about binary output -- how does your
proposal help me decide what I think is useful to me?


It already did, it seems - You just stated your decision, with regard to
binary output. Fine by me.
[...] Without being belligerent: why not use that if you want this
kind of thing?


Well, when I am programming in C++ I will use it. But I'm not going to move
all the way to using C++ just for this single purpose by itself.

I used "%x" as an example of a format specifier that isn't defined ('x'
being a placeholder for any letter that hasn't been taken by the
standard). The statement is that there'd be only 15 about letters left
for this kind of thing (including 'x' by the way -- it's not a hex
specifier). Sorry for the confusion, I should've been clearer.


Well what's wrong with %@, %*, %_, %^, etc?


%* will clash with already legal format specifiers like %*d. All the
others are just plain ugly :-)
A first-class citizen string wouldn't be a pointer; neither would you
necessarily be able to get its address (although you should be able to
get the address of the characters it contains).


But a string has variable length. If you allow strings to be mutable,
then the actual sequence of characters has to be put into some kind of
dynamic storage somewhere. Either way, the base part of the string would
in some way have to be storable into, say, a struct. But you can copy a
struct via memcpy or however. But this then requires a count increment
since there is now an additional copy of the string. So how is memcpy
supposed to know that its contents contain a string that it needs to
increase the ref count for? Similarly, memset needs to know how to
*decrease* such a ref count.


It's not very practical, is it... Hmmm. Instead of thinking of
perverse ways of circumventing these rather fundamental problems, I'll
just concede the point. Good show :-)
I'm saying that you could have &&&, |||, but just not define what they
actually do. Require that the programmer define what they do. C doesn't
have type-specific functions, and if one were to add in operator
overloading in a consistent way, then that would mean that an operator
overload would have to accept only its defined type.


Ok, so the language should have a big bunch of operators, ready for the
taking. Incidentally, Mathematica supports this, if you want it badly.


Hey, it's not me -- apparently it's people like you who want more operators.


Just a dozen or so! But _standardized_ ones.

Seriously, though, your "operation introduction" idea is something
different from "operator overloading" altogether. We should try not to
mix up these two.
My point is that no matter what operators get added to the C language, you'll
never satisfy everyone's appetites. People will just want more and more,
though almost nobody will want all of what could be added.

My solution solves the problem once and for all. You have all the operators
you want, with whatever semantics you want.
That's too much freedom for my taste. If I would want this kind of
thing, I would yank out lex and yacc and code my own language.
For this to be useful without losing the operators that already exist in
C, the right answer is to *ADD* operators. In fact I would suggest that
one simply define a grammar for such operators, and allow *ALL* such
operators to be definable.


This seems to me a bad idea for a multitude of reasons. First, it would
complicate most stages of the compiler considerably. Second, a
maintenance nightmare ensues: while the standard operators of C are
basically burnt into my soul, I'd have to get used to the Fantasy
Operator Of The Month every time I take on a new project, originally
programmed by someone else.

Yes, but if instead of actual operator overloading you only allow
redefinition of these new operators, there will not be any of the
*surprise* factor.
I don't know if you've ever experienced the displeasure of having to
maintain code that's not written by yourself, but it's difficult enough
as it is. Adding new operators might be interesting from a theoretical
perspective, but it surely is a bad idea from a broader software
engineering perspective.
If you see one of these new operators, you can just view it like you
view an unfamiliar function -- you'll look up its definition obviously.
There is an important difference: functions have a "name" that has a
mnemonic function. Operators are just a bunch of pixels with no link to
anything else. It's only by a lot of repetition that you get used to
weird things like '<' and '>'. I don't know about you, but I used to do
Pascal before I switched to C. It took me quite some time before I got
used to "!=".
There's a good reason that we use things like '+' and '*' pervasively,
in many situations; they are short, and easily absorbed in many
contexts. Self-defined operator tokens (consisting, of course, of
'atomic' operators like '+', '=', '<' ...) will lead to unreadable code,
I think; perhaps something akin to a complicated 'sed' script.

And allowing people to define their own functions with whatever names
they like doesn't lead to unreadable code? It's just the same thing.
Nope. See above.
What makes your code readable is adherence to an agreed upon coding
standard that exists outside of what the language defines.
There are several such standards for identifier names. No such standard
exists for operator names, except: use familiar ones; preferably, steal
them from other languages. The common denominator of all the identifier
standards is: "use meaningful names". I maintain that there is no
parallel for operators; there's no such thing as a "meaningful"
operator, except when you have been drilled to know their meaning. Your
proposal is in direct collision with this rather important fact of how
the human mind seems to work.
3) because operator overloading is mostly a bad idea, IMHO

Well, Bjarne Stroustrup has made a recent impassioned request to
*REMOVE* features from C++.

Do you have a reference? That's bound to be a fun read, and he probably
missed a few candidates.


It was just in the notes to some meeting Bjarne had in the last year or
so to discuss the next C++ standard. His quote was something like this:
while adding a feature to C++ can have value, removing one would have
even more value. Maybe someone who is following the C++ standardization
threads can find a reference -- I just spent a few minutes on Google and
couldn't find it.


Ok. I appreciate the effort.
I highly doubt that operator overloading is one that has
been made or would be taken seriously. I.e., I don't think a credible
population of people who have been exposed to it would consider it a bad idea.


I can only speak for myself; I have been exposed, and think it's a bad
idea. When used very sparsely, it has its uses. However, introducing
new user-definable operators as you propose would be folly; the only way
operator overloading works in practice is if you maintain some sort of
link to the intuitive meaning of an operator. User-defined operators
lack this by definition.

But so do user-definable function names. Yet, functionally they are
almost the same.
"names" refer to (often tangible) objects, whereas "operators" refer to
abstract ideas. I'm no psychologist, but I would guess they could back
up my claim that it's easier for us to handle names than symbols. For
one thing, I have yet to see the first 2-year old that utters "greater
than" as first words.
<<< and @ are nice though. I would be almost in favour of adding them,
were it not for the fact that this would drive C dangerously close in
the direction of APL.


You missed the "etc., etc., etc." part.


In a sense, I truly missed it. Your suggestions were rather interesting! ;-)
I could keep coming up with them until the cows come home: a! for
factorial, a ^< b for "a choose b" (you want language support for this
because of overflow concerns with using the direct definition), <-> a
for endian swapping, $% a for the fractional part of a floating point
number, a +>> b for the average (there is another overflow issue),
etc., etc.
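
(The average is a good example of what such an operator would buy you;
today's portable idiom for an overflow-safe unsigned average is the
rather opaque

    /* computes (a + b) / 2 without losing the carry */
    unsigned avg(unsigned a, unsigned b)
    {
        return (a & b) + ((a ^ b) >> 1);
    }

which hardly anyone recognizes on sight.)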
Golly! You truly are good at this :-)
Again I wonder, seriously: wouldn't you be better off using C++?

No, because I want *MORE* operators -- not just the ability to redefine
the ones I've got (and thereby lose some.)
Ok. Your opinion on this is quite clear. I disagree for technical
(implementability) and psychological (names versus symbols) reasons. We
could just leave it at that.
[snipped a bit...] I find the idea freaky, yet interesting. I think C is not the place for
this (really, it would be too easy to compete in the IOCCC) but perhaps
in another language... Just to follow your argument for a bit, what
would an "operator definition" declaration look like for, say, the "?<"
min operator in your hypothetical extended C?


This is what I've posted elsewhere:

int _Operator ?< after + (int a, int b) {
    if (a > b) return a;
    return b;
}


I already saw that and reacted. Will come to that in another post.
Yes I'm sure the same trick works for chars and shorts. So how do you
widen a long long multiply?!?!? What compiler trick are you going to
hope for to capture this? What you show here is just some trivial
*SMALL* multiply, that relies on the whims of the optimizer.


Well, I'd show you, but it's impossible _in principle_. Given that you
are multiplying two expressions of the widest type supported by your
compiler, where would it store the result?


In two values of the widest type -- just like how just about every
microprocessor which has a multiply does it:

high *% low = a * b;
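
Contrast that with what you're forced to write in portable C today just
for the *unsigned* 64x64 -> 128 case, building the product out of 32-bit
halves (a sketch; widemul64 is a made-up name):

#include <stdint.h>

/* Sketch: unsigned 64x64 -> 128 bit multiply built from 32-bit halves */
void widemul64(uint64_t a, uint64_t b, uint64_t *hi, uint64_t *lo)
{
    uint64_t a0 = a & 0xFFFFFFFFu, a1 = a >> 32;
    uint64_t b0 = b & 0xFFFFFFFFu, b1 = b >> 32;

    uint64_t p00  = a0 * b0;
    uint64_t mid  = a0 * b1 + (p00 >> 32);          /* cannot overflow */
    uint64_t mid2 = a1 * b0 + (mid & 0xFFFFFFFFu);  /* cannot overflow */

    *hi = a1 * b1 + (mid >> 32) + (mid2 >> 32);
    *lo = (mid2 << 32) | (p00 & 0xFFFFFFFFu);
}

and no compiler is obliged to recognize that the whole thing is a single
machine multiply.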


Hmmm. I hate to do this again, but could you provide semantics? Just to
keep things manageable, I'd be happy to see what happens if high, low,
a, and b are any possible combinations of bit-widths and signedness.
Could you clearly define the meaning of this?
PowerPC, Alpha, Itanium, UltraSPARC and AMD64 all have widening
multiplies that take two 64 bit operands and return a 128 bit result in
a pair of 64 bit operands. They all invest a *LOT* of transistors to do
this *ONE* operation. They all *KNOW* you can't finagle any C/C++
compiler to produce the operation, yet they still do it -- it's *THAT*
important (hint: SSL, and therefore *ALL* of e-commerce, uses it.)
Well, I don't know if these dozen-or-so big-number 'powermod' operations
that are needed to establish an SSL connection are such a big deal as
you make them.

It's not me -- it's Intel, IBM, Motorola, Sun and AMD who seem to be
obsessed with these instructions.
I don't see them working the committees to get these supported in
non-assembly languages. I guess they're pretty satisfied with the bignum
libs that exist, that provide assembly implementations for all important
platforms (and even a slow fallback for others). The reality is that
no-one seems to care about this except you.

Of course Amazon, Yahoo and Ebay and most banks are kind of obsessed
with them too, even if they don't know it.
I think you would find that bignum operations are a small part of the
load on e-commerce servers. All RSA-based protocols just do a small
amount of bignum work at session-establishment time to agree to a key
for a shared-secret algorithm.
Many languages exist where this is possible; they are called
"assembly". There is no way that you could come up with a well-defined
semantics for this.

carry +< var = a + b;


It looks cute, I'll give you that. Could you please provide semantics?
It may be a lot less self evident than you think. How about:

- carry is set to either 1 or 0, depending on whether or not a + b
overflows (just follow the 2s complement rules if one of a or b is
negative.)
Hang on, are we talking about "overflow" or "carry" here? These are two
different things with signed numbers.

What happens if a is signed and b is unsigned?
- var is set to the result of the addition; the remainder if a carry occurs.
What happens if the signedness of var, a, and b are not equal?

What happens if the bit-widths of var, a, and b are not equal?
- The whole expression (if you put the whole thing in parentheses)
returns the result of carry.
... So this would presume the actual expression is: "+< var = a + b".
There's no need to introduce a mandatory "carry" variable, then.
In fact, if I were only interested in the carry, I'd be out of luck: I'd
still need the 'var'. That's a bit ugly.

Basically, this is a C-esque syntax for a tuple assignment which
unfortunately is lacking in C:

(carry, value) = a+b
+< would not be an operator in and of itself -- the whole syntax is
required. For example: c +< v = a * b would just be a syntax error. The
"cuteness" was stolen from an idea I saw in some ML syntax. Obviously
+< - would also be useful.
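
For the unsigned case you can at least spell the effect out by hand in
today's C (a sketch; add_carry is a made-up name):

#include <stdint.h>

/* Sketch: emulating "carry +< var = a + b" for unsigned operands */
uint32_t add_carry(uint32_t a, uint32_t b, uint32_t *var)
{
    *var = a + b;        /* wraps modulo 2^32 */
    return *var < a;     /* 1 if the addition wrapped, else 0 */
}

but whether a compiler turns that comparison back into the carry flag is
anyone's guess.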
I would think you don't need the "c" as well, to make a valid
expression. But I would still need to know what happens with all the
bit-widths and signedness issues.
Did you know that a PowerPC processor doesn't have a shift-right where
you can capture the carry bit in one instruction? Silly but no less true.

What has this got to do with anything? Capturing carries coming out of
shifts doesn't show up in any significant algorithm that I am aware of
that is significantly faster than using what we have already.


Ah, I see you've never implemented a non-table-driven CRC or a binary
greatest common divisor algorithm.


You can find a binary gcd algorithm that I wrote here:

http://www.pobox.com/~qed/32bprim.c


That's not the "binary GCD algorithm", that's just Knuths version that
avoids modulos. Below is a binary GCD.

unsigned bgcd(unsigned a, unsigned b)
{
    unsigned c, e;

    if (a == 0 || b == 0)     /* gcd(x, 0) == x; also avoids an
                                 endless loop in the code below */
        return a | b;

    /* strip the common factors of 2, counting them in e */
    for (e = 0, c = a | b; c % 2 == 0; e++) c /= 2;
    a >>= e;
    b >>= e;

    /* make both operands odd */
    while (a % 2 == 0) a /= 2;
    while (b % 2 == 0) b /= 2;

    /* subtract the smaller from the larger until they meet */
    while (a != b)
    {
        if (a < b)
        {
            b -= a;
            do b /= 2; while (b % 2 == 0);
        }
        else
        {
            a -= b;
            do a /= 2; while (a % 2 == 0);
        }
    }
    return a << e;            /* restore the common factors of 2 */
}
You will notice how I don't use or care about carries coming out of a
right shift. There wouldn't be enough of a savings to matter.

Check bgcd().
The specific operations I am citing make a *HUGE* difference and have billion
dollar price tags associated with them.


These numbers you made up from thin air, no? Otherwise, I'd welcome a
reference.


Widening multiplies cost transistors on the CPU. The hardware
algorithms are variations of your basic public school multiply algorithm
-- so it takes n^2 transistors to perform the complete operation, where
n is the largest bit word that the machine accepts for the multiplier.
If the multiply were not widened they could save half of those
transistors. So multiply those extra transistors by the number of CPUs
shipped with a widening multiply (PPCs, x86s, Alphas, UltraSPARCs, ...
etc) and you easily end up in the billion dollar range.


This is probably the most elaborate version of "yes, I made these
numbers up from thin air" I've ever come across :-)
I understand the need for the C language standard to be applicable to as
many platforms as possible. But unlike some right shift detail that you
are talking about, the widening multiply hardware actually *IS* deployed
everywhere.


Yup. And it is used too. From machine language.
Sure is. Several good big-number libraries are available that have
processor-dependent machine code to do just this.


And that's the problem. They have to be hand written in assembly. Consider
just the SWOX Gnu multiprecision library. When the Itanium was introduced,
Intel promised that it would be great for e-commerce.


Correction: the Intel marketing department promised that it would be
great for e-commerce.
The problem is that the SWOX guys were having a hard time with IA64 assembly
language (as apparently lots of people are.)
Yes, it's close to a VLIW architecture. Hard to code manually.
So they projected performance results for
the Itanium without having code available to do what they claim. So people
who wanted to consider using an Itanium system based on its performance for
e-commerce were stuck -- they had no code, and had to believe Intel's claims,
or SWOX's as to what the performance would be.
The only thing your example shows is that a marketing angle sometimes
doesn't rhyme well with technical realities.
OTOH, if instead, the C language had exposed a carry propagating add,
and a widening multiply in the language, then it would just be up to the
Intel *compiler* people to figure out how to make sure the widening
multiply was used optimally, and the SWOX/GMP people would just do a
recompile for baseline results at least.


I would guess that Intel, being both a compiler maker and the IA64
manufacturer, could have introduced a macro widemul(hr,lr,a,b) to do
this, and help the SWOX guys out a bit?

My guess is that they have problems with raw performance and/or compiler
technique. I have some experience with a VLIW compiler, and these things
need a compiler pass to do instruction-to-execution-pipeline allocation.
This is a very active area of research, and notoriously difficult. My
guess is that there are inherent problems in getting high performance
out of IA64 for this kind of algorithm. VLIW and VLIW-like
architectures can do wonders on high-throughput, low-branching types of
work, but they tend to break down on some very simple algorithms, if
there is a lot of branching.

I don't know SWOX; what do they use for bignum multiplication?
Karatsuba's algorithm?

Best regards, Sidney

Nov 13 '05 #121
Paul Hsieh wrote:
Sidney Cadot <si****@jigsaw.nl> wrote:
Paul Hsieh wrote:
Good point, but something as simple as "lowest precedence" and
increasing in the order in which they are declared seems fine enough.
Or maybe inverted -- just play with those combinations to see what makes
sense in practice. If that's not good enough, then make the precedence
level relative to another operator at the time of declaration. For
example:

int _Operator ?< after + (int x, int y) { /* max */
    if (x > y) return x;
    return y;
}

int _Operator ?> same ?< (int x, int y) { /* min */
    if (x < y) return x;
    return y;
}


That looks like a decent first stab at a proposed syntax. Some
questions though:

- What would be the constraints on acceptable operator names?

As I said it would be defined in some sort of grammar. I am not a
parsable language expert but basically it has to allow for things
like:

<<<, ?<, ===, ^^, |||, &&&, etc

but disallow things like:

+++, ---, ~~, !!, **, etc

due to parsing ambiguities with existing semantics.


I'm sorry, but that's not much of an answer. We are now both stuck, not
knowing whether it is possible even in principle to do this. And the
onus is on you to do so, I would think.
Perhaps as an operator gains unary attributes, tacking things onto the
end of it becomes a syntax error as well.
That statement does not mean a lot without some definitions.
- In what way will you handle the possible introduction of ambiguity in
the parser, that gets to parse the new tokens?

By defining a good grammar.

I seriously doubt this is possible even in principle.
- What if I want a ?< to work both on int and on double types?

Specific type overloading, as is done in C++?

With or without implicit conversions?
- How are you going to define left, right, or lack of associativity?

- Does (and if so, how) your syntax allow the introduction of unary
prefix operators (such as !), binary infix operators that may have
compile-time identifiers as a parameter (such as ->), n-ary operators
(such as the ternary a?b:c or your proposed quaternary carry/add
operator), and operators that exist both in unary and binary form (+, -)?

The easiest way to handle these issues is to allow an operator to
inherit the attributes of a previously defined operator. So we add in
the "like" clause:

int _Operator ?< after + like + (int x, int y) { /* max */
    if (x > y) return x;
    return y;
}
What would the rules look like to know whether such a thing yields a
non-ambiguous tokenizer/parser? This is just to assess whether it could
be made to work _even in theory_. (That's the domain this idea is bound
to anyway.)
I would propose that -> or . not be considered operators as they have
a really different nature to them that one could not map to function
definition semantics in a sensible way.
Why not? What if I want an operator a@^&@b that gives me the address of
the field following a.b in struct a?
So as to the question of
whether or not ? : or the quaternary operators that I proposed -- the
idea would be to first *ADD IN* the quaternary base operators that I
have proposed, then just use the "like" clause as I've described
above.
So we're back to square one: more operators in the core language?
So to reduce ambiguity again, I would also not consider the
dereferencing meaning of * or the address-taking meaning of & to be
operators either.
That's a pretty arbitrary limit.
int _Operator ?> : after ? : like ? : (int x, int y, int z) {
    if (x >= 0) return y;
    return z;
}
My gut feeling is that this would effectively force the compiler to
maintain a dynamic parser on-the-fly while scanning through the source,
which would be wildly complex.
Yes it complexifies the parser quite a bit. I don't dispute that.


... Beyond the complexity of a C++ compiler, for one thing. And quite a
bit beyond it as well.
[...] You mentioned that actual languages exist
that do this sort of thing; are they also compiled languages like C, or
are they interpreted languages of the functional variety?

No, actually I am unaware of any language that has infinite operators.
I think APL has a number of operators that could be described as "close
to infinite" ... :-)
Someone mentioned Mathematica, but I am unaware of its language
semantics.

That was me. It provides a lot of free operators that you can assign
meaning to (but not an extensible set). But in Mathematica, all this is
just syntactic sugar. Your proposal looks more like syntactic vinegar to
me :-)
I mentioned ML as the inspiration of quaternary operators,
but that's about it.


Pity. That could yield some useful lessons.

Best regards, Sidney

Nov 13 '05 #122
Arthur J. O'Dwyer wrote:
On Fri, 5 Dec 2003, Paul Hsieh wrote:
Sidney Cadot <si****@jigsaw.nl> wrote:
- What would be the constraints on acceptable operator names?
As I said it would be defined in some sort of grammar. I am not a
parsable language expert but basically it has to allow for things
like:

<<<, ?<, ===, ^^, |||, &&&, etc

but disallow things like:

+++, ---, ~~, !!, **, etc

due to parsing ambiguities with existing semantics.

Nitpick: &&& should be in the same category as !! or **, since it
currently also has useful semantics.
The basic rule for "new operator" candidacy is simple: If it ends
with a unary prefix operator, throw it out. If it begins with a
unary postfix operator, throw it out. If it contains the consecutive
characters "/*", "*/", or "//", throw it out. Everything else should
be acceptable, unless I'm missing a case.


Yes, that would take care of tokens, I guess.
- In what way will you handle the possible introduction of ambiguity in
the parser, that gets to parse the new tokens?


By defining a good grammar.

The ambiguity won't be in the parser; it'll be in the lexer.


Until you consider the precedence and associativity of the new
operator. Surely, this will impact the parser as well.
And the *semantics* of the code will affect the output of the
lexer, if you allow new operators to evolve on the fly. Which
will make this new language practically impossible to implement
using lex/yacc techniques anymore.
On the contrary... In a way, the language Paul proposes already exists:
it's called bison input files, with a C grammar pre-installed.
My gut feeling is that this would effectively force the compiler to
maintain a dynamic parser on-the-fly while scanning through the source,
which would be wildly complex.


Yes it complexifies the parser quite a bit. I don't dispute that.

Not the parser so much as the lexer. Is "a %^&%*!^ b" three tokens,
four, five, six, seven, eight, or nine? It depends on the semantics
of the code we've already translated.


I would say the lexer is the least of the problems. It already has to
maintain dynamic tokens for typedefs; I guess it could handle this as
well. But the dynamic parser... That would be a monster.

Best regards,

Sidney

Nov 13 '05 #123
aj*@nospam.andrew.cmu.edu says...
My gut feeling is that this would effectively force the compiler to
maintain a dynamic parser on-the-fly while scanning through the source,
which would be wildly complex.


Yes it complexifies the parser quite a bit. I don't dispute that.


Not the parser so much as the lexer. Is "a %^&%*!^ b" three tokens,
four, five, six, seven, eight, or nine? It depends on the semantics
of the code we've already translated.
Note that this is *NOT*, repeat *NOT*, an idea that will ever make
it into the C programming language, for this reason -- as expressed
in this subthread, it would break almost *every* C-parsing tool on
the market.


So would adding &&& and |||. Remember that my premise is that if one is
motivated to add one of those, why not add in something more general?

--
Paul Hsieh
http://www.pobox.com/~qed/
http://bstring.sf.net/
Nov 13 '05 #124
In article <3F***************@yahoo.com>, cb********@yahoo.com says...
Paul Hsieh wrote:
CBFalconer <cb********@yahoo.com> wrote in message news:<3F***************@yahoo.com>...
Paul Hsieh wrote:
> Keith Thompson <ks***@mib.org> wrote:
... snip stuff about multitasking ...
> > That's the same problem you have with any function that returns
> > a string. There are numerous solutions; programmers reinvent
> > them all the time.
>
> Homey saywhat? You have to do the free *after* the printf. Yet
> you still need to account for multitasking. You would have to
> LOCK before the printf, then FREEALL / UNLOCK after the printf
> just to make it work. Which can have a massive performance impact.

Nonsense. printf() simply has to make a private copy of its data
before returning. This is much easier in languages that use
references. Systems have been managing buffers for some time now.


Excuse me, but someone has to *free* the memory. Explain how this is
done without some kind of lock or handle grabbing operation prior to
the printf (in which case you might as well do your own separate printf
for each -> string operation, and free each result by hand) and
without modifying printf.


printf (or anything else) secures the necessary memory, copies
things into it, and can now return to its caller while letting a
separate task process the data. When done, it releases that
memory.


Which part of "without modifying printf" did you miss? It own output string is
not at issue. What's at issue is that you have just created a function which
returns an string. If the allocation is static, then you have multitasking/re-
entrancy problem. If its dynamic then you have a memory leak issue to contend
with. If you use a ref counting system, you have to same problem as the memory
leak issue -- who decrements the count when printf is done?

--
Paul Hsieh
http://www.pobox.com/~qed/
http://bstring.sf.net/
Nov 13 '05 #125
Paul Hsieh wrote:
aj*@nospam.andrew.cmu.edu says...
My gut feeling is that this would effectively force the compiler to
maintain a dynamic parser on-the-fly while scanning through the source,
which would be wildly complex.

Yes it complexifies the parser quite a bit. I don't dispute that.


Not the parser so much as the lexer. Is "a %^&%*!^ b" three tokens,
four, five, six, seven, eight, or nine? It depends on the semantics
of the code we've already translated.
Note that this is *NOT*, repeat *NOT*, an idea that will ever make
it into the C programming language, for this reason -- as expressed
in this subthread, it would break almost *every* C-parsing tool on
the market.

So would adding &&& and |||. Remember that my premise is that if one is
motivated to add one of those, why not add in something more general?


Bad comparison. Support for &&& and ||| would be infinitely easier to
add than support for what you propose.

Regards,

Sidney

Nov 13 '05 #126
Paul Hsieh wrote:
CBFalconer <cb********@yahoo.com> wrote:

.... snip ...

printf (or anything else) secures the necessary memory, copies
things into it, and can now return to its caller while letting a
separate task process the data. When done, it releases that
memory.


Which part of "without modifying printf" did you miss? It own
output string is not at issue. What's at issue is that you have
just created a function which returns an string. If the allocation
is static, then you have multitasking/re- entrancy problem. If its
dynamic then you have a memory leak issue to contend with. If you
use a ref counting system, you have to same problem as the memory
leak issue -- who decrements the count when printf is done?


printf etc are system procedures. If we are building them we
should be free to make them work, or to use suitable lower level
functions that are called by it. ISO C is not a multitasking
system, so to run such programs correctly in a multi-tasking
system requires that the various interfaces be carefully and
correctly designed.

One of the principles to be observed is that data storage not be
released before it is used.

You seem to be arguing about nothing at all.

--
Chuck F (cb********@yahoo.com) (cb********@worldnet.att.net)
Available for consulting/temporary embedded and systems.
<http://cbfalconer.home.att.net> USE worldnet address!
Nov 13 '05 #127
In article <bq**********@news.tudelft.nl>, si****@jigsaw.nl says...
Paul Hsieh wrote:
Look, nobody uses K&R-style function declarations anymore. The reason is
because the ANSI standard obsoleted them, and everyone picked up the ANSI
standard. That only happened because *EVERYONE* moved forward and picked up
the ANSI standard. One vendor is irrelevant.
Ok. Can't speak for MW, but I think that by the end of 2007 we'll have
near-perfect C99 compilers from GNU, Sun, Microsoft, and Intel. Odds now
down to 5:1; you're in?


With the *and* in there? Sure. Since Microsoft alone will not do it
(otherwise why not back port MFC to C?), the GNU people may decide that
the last 10% just isn't worth it, Sun has two other languages to worry
about that take precedence in terms of development resources, and once
C++0x emerges, C99 development by any of these vendors will almost
certainly be halted. I'd have to be wrong on all of these guys to lose
this.
Ok. I agree with you on extending the capabilities of the preprocessor
in general, although I can't come up with actual things I miss in
everyday programming.
I run into this every now and then. For example, I was recently trying
to solve the following problem: create a directed graph on n points,
each with an out-degree of d (I am concerned with very small d, like 2),
such that the path length between any pair of points is minimized.

Turns out that this is a problem whose complexity grows
super-exponentially even for very small values of n. Despite my best
efforts, I don't know the answer for n=11, d=2 for example (I know it's
either 3 or 4, but I can't prove that it's not 3). More startling is the
possibility that n=12, d=2 might have a smaller latency (but I don't
know that either)!

Anyhow, the point is that the only way to have a chance to squeeze
enough computational juice out of my PC to solve this was to hard-code
huge amounts of the code, and use a lot of bit twiddling tricks just for
each special case. I used preprocessor macros as much as possible to
keep my code manageable, but in order to change n, I actually have to
*modify* the code by adding and changing *n* lines of code. There's not
much I can do about this; I would have no chance to solve this
otherwise.

With a more powerful preprocessor, or a code generator, I could actually make
this so that I could modify one #define, or even possibly make it run time
settable (though this would make the code much larger.)
I'm saying that trying to fix C's intrinsic problems shouldn't start or end
with some kind of resolution of call stack issues. Anyone who understands
machine architecture will not be surprised about call stack depth limitations.


It's the task of a standard to spell these out, I think.
There are far more pressing problems in the language that one would like to
fix.


Yes, but most things that relate to the "encouraging of writing
extremely unsound and poor code", as you describe C, would be better
fixed by using another language. A lot of the inherently unsafe things
in C are sometimes needed, when doing low-level stuff.


Why is UB in isgraph(-1) needed? Why is UB in gets() needed? Why is the fact
that fgets() skips over '\0' characters needed? Why is a non-portable right
shift on signed integers needed (especially considering the one on unsigned
*is* portable)?
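
To make the last one concrete, here's a sketch; the first shift's
result is whatever the implementation says it is:

#include <stdio.h>

int main(void)
{
    int s = -8;
    unsigned u = 8;

    printf("%d\n", s >> 1); /* implementation-defined: -4 on most machines */
    printf("%u\n", u >> 1); /* fully defined: always 4 */
    return 0;
}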
[powerful heap manager]
But a third party library can't do this portably.

I don't see why not?


Explain to me how you implement malloc() in a *multithreaded*
environment portably. You could claim that C doesn't support
multithreading, but I highly doubt you're going to convince any vendor
that they should shut off their multithreading support based on this
argument.


Now you're shifting the goalposts.


How so? The two are related. Writing a heap manager requires that you are
aware of multitasking considerations. If I want to extend the heap manager, I
have to solve them each one by one in different ways on different platforms.
And, of course, what sort of expectation of portability would an end user have
knowing that the library itself had to be carefully hand ported?

Compare this to the string library that I wrote (http://bstring.sf.net/).
It's totally portable. Although I don't have access to Mac OS X, and
don't use gnu C on Linux, in order to test to make sure, I know for a
fact that end users from both platforms have downloaded and have/are
actively using it. I know it works in 16 bit and 32 bit DOS/Windows
environments, etc., etc. It's totally portable, in the real-world sense
of portable (semantically, as well as syntactically.) The point is that
the end users know that there is absolutely 0 risk of losing portability
or having porting problems because of the use of this library.

Third party tools for advanced heaps make little sense. It would only
be worth consideration if they were actively ported to many platforms --
which increases their cost. I.e., for whatever set of platforms I am
considering using such a library on, I am paying for the development
cost of every platform that it would be ported to in the price of using
it. It's also highly useless for new hardware platforms which are in
development. Even having access to the source is of lesser value if
there are platform specifics in each instance that require munging just
to port it.

The point is, if the C standard were simply to add this functionality
straight into the library, then it would be each compiler vendor's
responsibility to add this functionality into the compiler. And the
functionality would be inherently portable as a result. The requirement
of multitasking support would then be pushed back into the vendor's lap
-- i.e., the people who have introduced this non-portable feature in the
first place.
By dictating its existence in the library, it would put the
responsibility of making it work right in the hands of the vendor
without affecting the C standard's stance on not acknowledging the need
for multithreading.


Obviously, you cannot write a multithreading heap manager portably if
there is no portable standard on multithreading (doh). If you need this
you can always presume POSIX, and be on your way.


You do not understand. The *IMPLEMENTATION* needs to be multithreading
aware. From a programmer's point of view, the multithreading support is
completely transparent. This has no end user impact from a POSIX
specification point of view.
It's actually useful functionality that you just can't get from the C
language, and there's no way to reliably map such functionality to the C
language itself. One is forced to know the details of the underlying
platform to implement such things. It's something that really *should*
be in the language.
I disagree. POSIX is for things like this.
Well, it looks to me like you're proposing to have a feature-rich heap
manager. I honestly don't see why this couldn't be implemented portably,
without platform-specific knowledge. Could you elaborate?


See my multithreading comment above. Also, efficient heaps are usually
written with a flat view of memory in mind. This is more or less
impossible on non-flat memory architectures (like segmented
architectures.)


What does the latter remark have to do with C's suitability for doing it?


I am saying that I don't disagree with you, but I think you are missing the
point. By simply adding the features/function into C, that would make it
defacto portable from the point of view that matters -- programmers of the C
language.

For most flat-memory architectures, it's actually very straightforward
to add in all the features that I am requesting. I know this because
I've written my own heap manager, which of course uses platform specific
behaviour for the one platform I am interested in. (It only gets
complicated for platforms with unusual memory, like segmented
architectures, which have correspondingly complicated heap managers
today.) This is in stark contrast with the incredibly high bar of
platform-specific complications set by trying to do this outside of the
language.
My proposal allows the programmer to decide what is or is not useful to them.
I'm all for that.


Well, I'm a programmer, and I don't care about binary output -- how does your
proposal help me decide what I think is useful to me?


It already did, it seems - You just stated your decision, with regard to
binary output. Fine by me.


Not fine by me. Because some other programmer whose code I have to look
at will use it, and I won't have any idea what it is.
I'm saying that you could have &&&, |||, but just not define what they
actually do. Require that the programmer define what they do. C doesn't
have type-specific functions, and if one were to add in operator
overloading in a consistent way, then that would mean that an operator
overload would have to accept only its defined type.

Ok, so the language should have a big bunch of operators, ready for the
taking. Incidentally, Mathematica supports this, if you want it badly.


Hey, it's not me -- apparently it's people like you who want more operators.


Just a dozen or so! But _standardized_ ones.


And you don't see the littlest problem with this proposal? If you add a
dozen, you'd better be sure that they are the last dozen that anyone
could possibly want to add to the language. Their value add would have
to be worth the pain of having everyone learn about 12 new symbols.
My point is that no matter what operators get added to the C language, you'll
never satisfy everyone's appetites. People will just want more and more,
though almost nobody will want all of what could be added.

My solution solves the problem once and for all. You have all the operators
you want, with whatever semantics you want.


That's too much freedom for my taste. If I would want this kind of
thing, I would yank out lex and yacc and code my own language.


Well how is that different from adding just &&& and |||? If you *REALLY
REALLY* want them, then why don't you yank out lex and yacc and code up a new
language?
I don't know if you've ever experienced the displeasure of having to
maintain code that's not written by yourself, but it's difficult enough
as it is.
Do it all the time.
[...] Adding new operators might be interesting from a theoretical
perspective, but it surely is a bad idea from a broader software
engineering perspective.
It's no worse than trying to add in &&& or ||| today.
If you see one of these new operators, you can just view it like you
view an unfamiliar function -- you'll look up its definition obviously.


There is an important difference: functions have a "name" that has a
mnemonic function.


But the name may be misleading -- as is the case, more often than not,
just reflecting the thought of the original programmer, who maybe cut
and pasted it from somewhere else.
[...] Operators are just a bunch of pixels with no link to
anything else. It's only by a lot of repetition that you get used to
weird things like '<' and '>'. I don't know about you, but I used to do
Pascal before I switched to C. It took me quite some time before I got
used to "!=".
And how long will it take the rest of us to get used to your weird &&& or |||
operators?
What makes your code readable is adherence to an agreed upon coding
standard that exists outside of what the language defines.


There are several such standards for identifier names. No such standard
exists for operator names, except: use familiar ones; preferably, steal
them from other languages.


Sounds like a reasonable convention to me. How about: All new operators must
be defined in a central module named ----. Or: Only these new operators may be
added as defined by ... yada, yada, yada. The coding standards are just
different.
[...] The common denominator of all the identifier
standards is: "use meaningful names". I maintain that there is no
parallel for operators; there's no such thing as a "meaningful"
operator, except when you have been drilled to know their meaning. Your
proposal is in direct collision with this rather important fact of how
the human mind seems to work.
Just like freeform variable names, there is the same encumbrance on the
programmers of managing the meaning of the symbols for more generic
operators. You have not made a sufficient case to convince me that
there is a real difference between the two.
<<< and @ are nice though.
BTW -- odd how you actually thought that these were "nice". Do you
think you would have a hard time remembering them? Would they rankle
your brain because they were odd and unfamiliar operators that are new
to the language? Would you have a tendency to write abusive code
because of the existence of these new operators?

Do you think perhaps it's possible to use an arbitrarily extendable
operator mechanism in order to *clarify* or make code actually more
maintainable?
Yes I'm sure the same trick works for chars and shorts. So how do you
widen a long long multiply?!?!? What compiler trick are you going to
hope for to capture this? What you show here is just some trivial
*SMALL* multiply, that relies on the whims of the optimizer.

Well, I'd show you, but it's impossible _in principle_. Given that you
are multiplying two expressions of the widest type supported by your
compiler, where would it store the result?


In two values of the widest type -- just like how just about every
microprocessor which has a multiply does it:

high *% low = a * b;


Hmmm. I hate to do this again, but could you provide semantics? Just to
keep things manageable, I'd be happy to see what happens if high, low,
a, and b are any possible combinations of bit-widths and signedness.
Could you clearly define the meaning of this?


a, b, high and low must be integers. The signedness of the result of a * b (as
if it were not widened) dictates the result signedness. Coercion will happen
as necessary when storing to high and low. The whole expression will have a
side effect of returning high.
Well, I don't know if these dozen-or-so big-number 'powermod' operations
that are needed to establish an SSL connection are such a big deal as
you make it.
It's not me -- it's Intel, IBM, Motorola, Sun and AMD who seem to be
obsessed with these instructions.


I don't see them working the committees to get these supported in
non-assembly languages.


That's because they don't care about how it goes down once it hits
software. Someone has to pay something for a platform specific library?
-- who cares, as long as they sell their hardware. These same people
didn't get together to define the C standard, did they? Why would they
bother with an extension just for widening multiplies?

Instead the hardware people waste their time on trivial little specifications
like IEEE-754, which the C standards idiots don't bother to look at until 15
years later.
[...] I guess they're pretty satisfied with the bignum
libs that exist, that provide assembly implementations for all important
platforms (and even a slow fallback for others). The reality is that
no-one seems to care except you, on this.
The hardware people care that it exists, and not about the form of its
existence, so long as it gets used. For anyone who wants to *USE* this
functionality, though, they are stuck with assembly, or third party libraries.
Of course Amazon, Yahoo and Ebay and most banks are
kind of obsessed with them too, even if they don't know it.


I think you would find that bignum operations are a small part of the
load on e-commerce servers.


According to a paper by Intel, widening multiply accounts for something
like 30% of the load on typical e-commerce transactions (typing in your
credit card over the net in a way that can't be snooped.) One single
assembly instruction (one *bundle* on Itanium) holds a whole server down
for 30% of its computation, versus the total 100K lines of e-commerce
software required to do the rest. That's why HW manufacturers are keen
on the number of transistors they spend making this one operation
reasonably fast.
[...] All RSA-based protocols just do a small
amount of bignum work at session-establishment time to agree to a key
for a shared-secret algorithm.
This is only true for much larger secure transactions like ssh, or an
encrypted phone call or something. E-commerce is a much smaller,
one-shot transaction, where the RSA computation dominates.
>Many languages exist where this is possible; they are called
>"assembly". There is no way that you could come up with a well-defined
>semantics for this.

carry +< var = a + b;

It looks cute, I'll give you that. Could you please provide semantics?
It may be a lot less self evident than you think.
How about:

- carry is set to either 1 or 0, depending on whether or not a + b
overflows (just follow the 2s complement rules if one of a or b is
negative.)


Hang on, are we talking about "overflow" or "carry" here? These are two
different things with signed numbers.

What happens if a is signed and b is unsigned?


My intent was for the operation to follow the semantics of the x86 ADC
assembly instruction. The point is that this instruction is known to be
proper for doing correct bignum additions.
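
That's the loop that ADC makes cheap. Written out in today's portable C
for 32-bit words it becomes a sketch like this (bignum_add is a made-up
name):

#include <stdint.h>
#include <stddef.h>

/* Sketch: r = a + b over n words, returning the final carry out */
uint32_t bignum_add(uint32_t *r, const uint32_t *a, const uint32_t *b,
                    size_t n)
{
    uint32_t carry = 0;
    size_t i;
    for (i = 0; i < n; i++) {
        uint32_t t = a[i] + carry;
        uint32_t c1 = t < carry;    /* carry from adding the carry-in */
        r[i] = t + b[i];
        carry = c1 + (r[i] < t);    /* carry out of this word */
    }
    return carry;
}

and it's anyone's guess whether a compiler collapses those comparisons
back into an ADC chain.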
- var is set to the result of the addition; the remainder if a carry occurs.


What happens if the signedness of var, a, and b are not equal?


It just behaves like the ADC x86 assembly instruction, the details of which I
will not regurgitate here.
What happens if the bit-widths of var, a, and b are not equal?
The bit-widths would be converted as if the (a + b) operation were happening in
isolation, to match C language semantics.
- The whole expression (if you put the whole thing in parentheses)
returns the result of carry.


... So this would presume the actual expression is: "+< var = a + b".
There's no need to introduce a mandatory "carry" variable, then.


True. Is this a problem? Perhaps you would like it to return var as a side
effect instead to avoid this redundancy? I don't have that strong of a feeling
on it.
In fact, if I were only interested in the carry, I'd be out of luck: I'd
still need the 'var'. That's a bit ugly.
You could just omit it as a degenerate form:

+< = a + b
Basically, this is a C-esque syntax for a tuple assignment which
unfortunately is lacking in C:

(carry, value) = a+b
Yeah see, but the problem is that this encompasses existing C-language forms.
For all I know this might be legal C syntax already (I wouldn't know, I just
don't use the "," operator in this way) in which case we are kind of already
dead with backwards compatibility. There's also nothing in that syntax to
indicate some new thing was happening that is capturing the carry.
Ah, I see you've never implemented a non-table-driven CRC or a binary
greatest common divisor algorithm.


You can find a binary gcd algorithm that I wrote here:

http://www.pobox.com/~qed/32bprim.c


That's not the "binary GCD algorithm", that's just Knuths version that
avoids modulos. Below is a binary GCD.


Sorry, a previous version that I never put out on the web used the
binary algorithm. I tested Knuth's as much faster and thus updated it,
and forgot that I had done this.
The specific operations I am citing make a *HUGE* difference and have billion
dollar price tags associated with them.

These numbers you made up from thin air, no? otherwise, I'd welcome a
reference.


Widening multiplies cost transistors on the CPU. The hardware
algorithms are variations of your basic public school multiply algorithm
-- so it takes n^2 transistors to perform the complete operation, where
n is the largest bit word that the machine accepts for the multiplier.
If the multiply were not widened they could save half of those
transistors. So multiply those extra transistors by the number of CPUs
shipped with a widening multiply (PPCs, x86s, Alphas, UltraSPARCs, ...
etc) and you easily end up in the billion dollar range.


This is probably the most elaborate version of "yes, I made these
numbers up from thin air" I've ever come across :-)


But I didn't. I used to work with one of these companies. People spend time
and consideration on this one instruction. You could just feel the impact that
this one instruction was going to have, and the considerations for the cost of
its implementation. I could easily see a quarter million just in design
effort, then some quarter million in testing, not to mention the cost of the
extra die area once it shipped -- and this is inside of *ONE* of these
companies, for *ONE* chip generation.

For example, inside of Intel, they decided that they were going to reuse their
floating point multiplier for their widening integer multiply on Itanium. But
that meant that the multiplier had to be able to do 128 bit multiplies (as
opposed to 82 bits, which is all the Itanium would have apparently needed) and
couldn't run a floating point and integer multiply at the same time. This had
non-trivial layout and design impact on the chip.
I understand the need for the C language standard to be applicable to as
many platforms as possible. But unlike some right shift detail that you
are talking about, the widening multiply hardware actually *IS* deployed
everywhere.
Yup. And it is used too. From machine language.


And *only* machine language. That's the point.
Sure is. Several good big-number libraries are available that have
processor-dependent machine code to do just this.


And that's the problem. They have to be hand-written in assembly. Consider
just the SWOX GNU multiprecision library (GMP). When the Itanium was
introduced, Intel promised that it would be great for e-commerce.


Correction: the Intel marketing department promised that it would be
great for e-commerce.


I'll be sure to go track down the guy who gave a 2-hour-long presentation
showing the guts of a clever 56/64 bit carry-avoiding bignum multiply algorithm
that Intel was pushing for Itanium and SSE-2, and tell him that he's really
just a marketing guy. Intel's claims were real -- they had working code
in-house.
So they projected performance results for
the Itanium without having code available to do what they claimed. So people
who wanted to consider using an Itanium system, based on its performance for
e-commerce, were stuck -- they had no code, and had to believe Intel's claims,
or SWOX's, as to what the performance would be.


The only thing your example shows is that a marketing angle sometimes
doesn't rhyme well with technical realities.


No, it shows that without having a proper path, technology can be bogged down
by inane weaknesses in standards. Intel had its *C* compilers ready to go for
the Itanium *LONG* before any of this happened. Even to this day, we are
*STILL* waiting for the code to make it into GMP:

http://www.swox.com/gmp/gmp-speed.html

The grey bars indicate where they *think* the performance will be (because they
can't get their hands on the platform, or because they are still hacking on the
code) and the pink bars are actual delivered performance. So is Itanium really
fast or really slow at this? From that chart it's impossible to tell for sure.
OTOH, if instead the C language had exposed a carry-propagating add and a
widening multiply in the language, then it would just be up to the Intel
*compiler* people to figure out how to make sure the widening multiply was
used optimally, and the SWOX/GMP people would just do a recompile for baseline
results at least.


I would guess that Intel, being both a compiler maker and the IA64
manufacturer, could have introduced a macro widemul(hr,lr,a,b) to do
this, and help the SWOX guys out a bit?


They could have. But what kind of relationship do you think a proprietary
company like Intel has with a bunch of GPL geeks?
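
For what it's worth, such a widemul(hr,lr,a,b) has an easy portable
fallback -- the schoolbook decomposition into four half-width partial
products, which is exactly what a vendor intrinsic would collapse into
one hardware widening multiply. A sketch (the name and interface are
assumptions, not an actual Intel or GMP API):

#include <stdint.h>

/* Portable 64x64 -> 128 bit unsigned multiply: high word in *hi,
   low word in *lo, built from four 32x32 -> 64 partial products. */
static void widemul64(uint64_t *hi, uint64_t *lo, uint64_t a, uint64_t b)
{
    uint64_t a1 = a >> 32, a0 = a & 0xFFFFFFFFu;
    uint64_t b1 = b >> 32, b0 = b & 0xFFFFFFFFu;

    uint64_t p00 = a0 * b0;
    uint64_t p01 = a0 * b1;
    uint64_t p10 = a1 * b0;
    uint64_t p11 = a1 * b1;

    /* Sum the two middle partial products, tracking the wrap;
       a wrap here is worth 2^32 in the high word. */
    uint64_t mid = p01 + p10;
    uint64_t mid_carry = (mid < p01) ? ((uint64_t)1 << 32) : 0;

    uint64_t lo_sum = p00 + (mid << 32);
    uint64_t lo_carry = (lo_sum < p00);

    *lo = lo_sum;
    *hi = p11 + (mid >> 32) + mid_carry + lo_carry;
}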
I don't know SWOX; what do they use for bignum multiplication?
Karatsuba's algorithm?


I think they have an option for that. But from my recollection of having
looked into this, by the time Karatsuba is useful, more advanced methods like
Toom-Cook or straight FFTs become applicable as well.
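
As an aside, the trick Karatsuba exploits is easy to show at a single
level of recursion: splitting each operand in half turns four half-width
multiplies into three, at the cost of a few extra additions. A toy sketch
on 32-bit operands (real bignum code applies this recursively to arrays
of limbs):

#include <stdint.h>

/* Split x = x1*2^16 + x0 and y = y1*2^16 + y0. Then
   x*y = z2*2^32 + z1*2^16 + z0, where z2 = x1*y1, z0 = x0*y0,
   and the cross term is recovered from one extra multiply:
   z1 = (x1+x0)*(y1+y0) - z2 - z0. */
static uint64_t karatsuba32(uint32_t x, uint32_t y)
{
    uint32_t x1 = x >> 16, x0 = x & 0xFFFFu;
    uint32_t y1 = y >> 16, y0 = y & 0xFFFFu;

    uint64_t z2 = (uint64_t)x1 * y1;
    uint64_t z0 = (uint64_t)x0 * y0;
    uint64_t z1 = (uint64_t)(x1 + x0) * (y1 + y0) - z2 - z0;

    return (z2 << 32) + (z1 << 16) + z0;
}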

--
Paul Hsieh
http://www.pobox.com/~qed/
http://bstring.sf.net/
Nov 13 '05 #128

On Sat, 6 Dec 2003, Sidney Cadot wrote:

Paul Hsieh wrote:
aj*@nospam.andrew.cmu.edu says...
Not the parser so much as the lexer. Is "a %^&%*!^ b" three tokens,
four, five, six, seven, eight, or nine? It depends on the semantics
of the code we've already translated.
Note that this is *NOT*, repeat *NOT*, an idea that will ever make
it into the C programming language, for this reason -- as expressed
in this subthread, it would break almost *every* C-parsing tool on
the market.


So would adding &&& and |||. Remember that my premise is that if one is
motivated to add one of those, why not add in something more general?


Bad comparison. Support for &&& and ||| would be infinitely easier to
add than support for what you propose.


Good comparison. Support for &&& and ||| is exactly as likely to
break existing tools, and is exactly as likely to make it into a
future version of C. [There may not be a firm cause-and-effect there,
but it is correlated.]

Remember *my* premise: *If* one were motivated to add one of those,
why not add in wings to pigs? :-)

-Arthur
Nov 13 '05 #129
Arthur J. O'Dwyer wrote:
On Sat, 6 Dec 2003, Sidney Cadot wrote:
Paul Hsieh wrote:
aj*@nospam.andrew.cmu.edu says...
[...]

So would adding &&& and |||. Remember that my premise is that if one is
motivated to add one of those, why not add in something more general?
Bad comparison. Support for &&& and ||| would be infinitely easier to
add than support for what you propose.

Good comparison. Support for &&& and ||| is exactly as likely to
break existing tools, and is exactly as likely to make it into a
future version of C. [There may not be a firm cause-and-effect there,
but it is correlated.]
Yes, &&& and ||| would break existing tools, but there is precedent for
that (the introduction of // comments). The important difference is
that adding support for &&& and ||| would be rather trivial, while
support for Paul's proposal is a complete nightmare.

As to the likelihood of either feature making it into the C standard: I
disagree. The chances of ||| and &&& being added are many orders of
magnitude greater than those of the "operator introduction" feature
championed by Paul. Still very close to zero, of course. One has to
approach probabilities of this magnitude logarithmically :-)
Remember *my* premise: *If* one were motivated to add one of those,
why not add in wings to pigs? :-)


This did not go down well at all at sci.bio.genetic-engineering.pigs. It
seems that lack of vision is not limited to c.l.c ;-)
Best regards,

Sidney

Nov 13 '05 #130
