
Is C99 the final C?

I was just thinking about this, specifically wondering if there are any
features that the C specification currently lacks, and which might be
included in some future standardization.

Of course, I speak only of features in the spirit of C; something like
object-orientation, though a nice feature, does not belong in C.
Something like being able to #define a #define would be very handy,
though, e.g.:

#define DECLARE_FOO(bar) #define FOO_bar_SOMETHING \
#define FOO_bar_SOMETHING_ELSE

I'm not sure whether the features of cpp are even included in the C
standard though (and GCC has definitely taken quite a nonstandard approach
with regards to certain token expansions and whatnot), but that's one area
of improvement I see.

I would also like to see something along the lines of C++ templating,
except without the really kludgy implementation that the C++ folks decided
to go with (and without the OOP).

.... Mike pauses for the sound of a thousand *plonks*

Templates save a lot of time when it comes to commonly-used data
structures, and as they are entirely implemented at compile-time and don't
include, by their definition, OOP (although they can be well suited to
it), I think they would be a nice addition and in the spirit of C.

Your thoughts? I'm sure there's some vitriol coming my way but I'm
prepared 8)

--
Mike's Patented Blocklist; compile with gcc:

i=0;o(a){printf("%u",i>>8*a&255);if(a){printf(".");o(--a);}}
main(){do{o(3);puts("");}while(++i);}

Nov 13 '05
Paul Hsieh wrote:
Sidney Cadot <si****@jigsaw.nl> wrote:
.... snip ...
* upgraded status of enum types (they are currently quite
interchangeable with ints); deprecation of implicit casts from
int to enum (perhaps supported by a mandatory compiler warning).


I agree. Enums, as far as I can tell, are almost useless from a
compiler assisted code integrity point of view because of the
automatic coercion between ints and enums. It's almost not worth
bothering to ever use an enum for any reason because of it.


On the contrary they are extremely useful in defining a series of
constants that must follow each other, and yet allow easy
revision. Compare:

#define one 1
#define two (one + 1)
#define three (two + 1)
....
#define last (something + 1)

with

enum {one = 1, two, three, ..., last};

and compare the effort (and potential error) of injecting
twoandahalf in each.
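
For instance (the enum body below is an illustrative sketch, not from the
original post), inserting the new constant into the enum renumbers
everything that follows automatically, while the #define chain also needs
its next entry rewritten by hand:

enum {one = 1, two, twoandahalf, three, /* ... */ last};

/* versus touching two macro lines: */
#define twoandahalf (two + 1)
#define three (twoandahalf + 1)
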

--
Chuck F (cb********@yahoo.com) (cb********@worldnet.att.net)
Available for consulting/temporary embedded and systems.
<http://cbfalconer.home.att.net> USE worldnet address!
Nov 13 '05 #91
pete wrote:
Paul Hsieh wrote:
spell the final death nail


I think you mean "sound the final death knell"


You have pounded home the final nail while watching him writhe in
his final death throes. All of which is finally fine with me.
Sedulously eschew sadistic obfuscation. :-)

--
Chuck F (cb********@yahoo.com) (cb********@worldnet.att.net)
Available for consulting/temporary embedded and systems.
<http://cbfalconer.home.att.net> USE worldnet address!
Nov 13 '05 #92
On Sun, 30 Nov 2003, Sidney Cadot wrote:
I think C99 has come a long way to fix the most obvious problems in C89
(and its predecessors). I for one would be happy if more compilers would
fully start to support C99. It will be a good day when I can actually
start to use many of the new features without having to worry about
portability too much, as is the current situation.

"Arthur J. O'Dwyer" <aj*@nospam.and rew.cmu.edu> wrote: Agreed. Although besides the mixing of declarations and statements,
I can't think of any C99 features I *use* that are lacking in C90.


Hmm... in C99 I can do

char symbols [128] = { ['*'] = 1, ['/'] = 2, ['+'] = 3, ['-'] = 4, ... };

or

struct
{
char *expr;
int value;
} lexer ['z' - 'a'] = { ['b' - 'a'] = { "[a-zA-Z]", 0 }, ... };

or

struct flag
{
bool one;
bool other;
};

struct flag initFlag =
{
.one = false,
.other = true
};
(at global scope)

or

FILE *fp;
#define PRINTHIS(...) fprintf (fp, __VA_ARGS__)

or

#define varIF(...) if (__VA_ARGS__) {

or

#define VERIFY(...) { int table[] = { __VA_ARGS__, 1 }; printf("%d",table);

what's the equivalent in C89?

Nov 13 '05 #93
In message <ln************@nuthaus.mib.org>
Keith Thompson <ks***@mib.org> wrote:
One potential problem (assume 4-byte ints, normally requiring 4-byte
alignment):

_Packed struct { /* or whatever syntax you like */
char c; /* offset 0, size 1 */
int i; /* offset 1, size 4 */
} packed_obj;

You can't sensibly take the address of packed_obj.i. A function that
takes an "int*" argument will likely die if you give it a misaligned
pointer (unless you want to allow _Packed as an attribute for function
arguments). The simplest approach would be to forbid taking the
address of a member of a packed structure (think of the members as fat
bit fields).


The simplest way is to also treat _Packed as a type qualifier, much like
const.

&packed_obj. i would be of type _Packed int *; it wouldn't be possible
to assign it to an int *. For CPUs which don't allow unaligned access, the
concept of a _Packed (ie unaligned) pointer is useful, probably more so than
unaligned structures, as it allows the C programmer to easily read an
unaligned value from any raw, unaligned data.

That's the way the _Packed implementation I've seen works.

The one wrinkle is that the _Packed-ness has to attach to the struct tag in
that declaration, to ensure struct type compatibility. Not an issue in your
example, but it is if the struct was named. Also all sub-structures of
_Packed structures must themselves be _Packed.

I would be in favour of standardising _Packed. Even if you didn't totally
standardise _Packed structure layout, the standardisation of the actual
syntax and type rules would be worthwhile.
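
For comparison: in standard C today the portable way to read an unaligned
value out of raw bytes is memcpy; a _Packed pointer would, in effect, let
the compiler generate that access for you. A minimal sketch of the portable
workaround, not taken from any particular _Packed implementation:

#include <string.h>

/* Read an int that starts at an arbitrary, possibly misaligned,
   byte offset inside a raw buffer. */
int read_unaligned_int(const unsigned char *buf, size_t offset)
{
    int value;
    memcpy(&value, buf + offset, sizeof value);  /* alignment-safe */
    return value;
}
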

--
Kevin Bracey, Principal Software Engineer
Tematic Ltd Tel: +44 (0) 1223 503464
182-190 Newmarket Road Fax: +44 (0) 1223 503458
Cambridge, CB5 8HE, United Kingdom WWW: http://www.tematic.com/
Nov 13 '05 #94
In message <8b****************************@posting.google.com>
ku****@wizard.net (James Kuyper) wrote:
"Arthur J. O'Dwyer" <aj*@nospam.andrew.cmu.edu> wrote in message news:<Pi**********************************@unix44.andrew.cmu.edu>...
...
is easier for me to grasp. And I think (but this is just IMH
and uneducated O) that C++ compilers need a lot of extra baggage
to deal with constant "consts," and I wouldn't want to foist all
that on C compilers.


I'm curious - why do you think that? I don't know that you're wrong,
but I can't think of any reason why it would be a significant cost.


Nah, it's dead cheap. Only about 15 lines in my C++ compiler. You just need
an extra field in your "Variable" objects containing the "known" constant
value from the initialiser, if any, and you substitute that in every time the
variable's value is referenced.

It would be just as easy to do this as an optimisation for a C compiler as
well, if it weren't for the requirement to still flag up the non-constant
constraint violations. Also, tentative definitions slightly confuse things,
in that a const int may become known later.

--
Kevin Bracey, Principal Software Engineer
Tematic Ltd Tel: +44 (0) 1223 503464
182-190 Newmarket Road Fax: +44 (0) 1223 503458
Cambridge, CB5 8HE, United Kingdom WWW: http://www.tematic.com/
Nov 13 '05 #95

On Wed, 3 Dec 2003, Lorenzo Villari wrote:

On Sun, 30 Nov 2003, Sidney Cadot wrote:
I think C99 has come a long way to fix the most obvious problems in C89
(and its predecessors). I for one would be happy if more compilers would
fully start to support C99. It will be a good day when I can actually
start to use many of the new features without having to worry about
portability too much, as is the current situation.

"Arthur J. O'Dwyer" <aj*@nospam.and rew.cmu.edu> wrote:
Agreed. Although besides the mixing of declarations and statements,
I can't think of any C99 features I *use* that are lacking in C90.


Hmm... in C99 I can do


[and then at the end asks] what's the equivalent in C89?

char symbols [128] = { ['*'] = 1, ['/'] = 2, ['+'] = 3, ['-'] = 4, ... };
This is reasonable; it's just not something I do a lot.

char symbols[128] = {0};
symbols['*'] = 1;
symbols['/'] = 2;
symbols['+'] = 3;
...
or
char ops[] = {'*', '/', '+', '-', ...};
char symbols[128] = {0};
int i;
for (i=0; i < sizeof ops; ++i)
symbols[ops[i]] = i+1;

or

struct
{
char *expr;
int value;
} lexer ['z' - 'a'] = { ['b' - 'a'] = { "[a-zA-Z]", 0 }, ... };
No reasonable (portable) C equivalent that I can think of.
I'm not even sure what some of those expressions are *meant*
to do -- aren't you making the implicit assumption that ('b'-'a')
is equal to 1? So why not write 1? Why not write 25 instead of
('z' - 'a'), and save yourself the whole ASCII assumption?
Once you've gotten that far, it's trivial to re-write the code
in either the way I wrote above, or as a C89-style initializer
(simply drop the array indices).
or

struct flag
{
bool one;
bool other;
};
struct flag
{
int one;
int other;
};

which I tend to do, even in C++, which has had 'bool' longer than
C, I think (although I do write methods returning 'bool' when it
makes sense -- I guess the 'int' thing is just a habit at this point).
struct flag initFlag =
{
.one = false,
.other = true
};
(at global scope)
struct flag initFlag = { 0, 1 };

:-) I see how the C99 way is better, yes.

FILE *fp;
#define PRINTHIS(...) fprintf (fp, __VA_ARGS__)
#include <stdio.h>
#include <stdarg.h>
FILE *fp;
void PRINTHIS(const char *s, ...)
{
va_list ap;
va_start(ap, s);
vfprintf(fp, s, ap);
va_end(ap);
}

The "C89 way" even does some typechecking on the first
parameter! :-)

#define varIF(...) if (__VA_ARGS__) {
#define varIF(x) if (x) {
or
#define varIF(parenthesized) if parenthesized {
or
#undef varIF /* why use such a thing in the first place? */

#define VERIFY(...) { int table[] = { __VA_ARGS__, 1 }; printf("%d",table);


Huh? Here, there's a mismatch in the format specifier for
printf(); the variadic portion of the macro arguments can only
take on 'int' values; called with no arguments, the macro will
not even compile; the macro is not brace-balanced (missing one
closing } brace); and I have no idea what its point is.
But certainly it can't be compiled with a C89 compiler, either!

my $.02,
-Arthur
Nov 13 '05 #96
"Arthur J. O'Dwyer" <aj*@nospam.and rew.cmu.edu> wrote:

struct flag
{
bool one;
bool other;
};
struct flag
{
int one;
int other;
};

which I tend to do, even in C++, which has had 'bool' longer than
C, I think (although I do write methods returning 'bool' when it
makes sense -- I guess the 'int' thing is just a habit at this point).
struct flag initFlag =
{
.one = false,
.other = true
};
(at global scope)


struct flag initFlag = { 0, 1 };

:-) I see how the C99 way is better, yes.


Ok... but what if

struct problem
{
bool one;
bool other;
int someother;
char anyway[80];
float thank;
void *you;
double very_much;
};

struct problem initFlag =
{
.thank = 5.4,
.very_much = 4.5
};

and initFlag is still global...

I guess this should be something like

struct problem initFlag = { 0, 0, 0, {0}, 5.4, 0, 4.5 };

but I think the C99 syntax is clearer...
#define VERIFY(...) { int table[] = { __VA_ARGS__, 1 }; printf("%d",table);
Huh? Here, there's a mismatch in the format specifier for
printf(); the variadic portion of the macro arguments can only
take on 'int' values; called with no arguments, the macro will
not even compile; the macro is not brace-balanced (missing one
closing } brace); and I have no idea what its point is.
But certainly it can't be compiled with a C89 compiler, either!


In fact the missing brace is a typo...

Thank you for your explanations ^^



Nov 13 '05 #97
In article <bq**********@news.tudelft.nl>, si****@jigsaw.nl says...
Paul Hsieh wrote:
Sidney Cadot <si****@jigsaw.nl> wrote:
[...] I for one would be happy if more compilers would
fully start to support C99. It will be a good day when I can actually
start to use many of the new features without having to worry about
portability too much, as is the current situation.
I don't think that day will ever come. In its totality C99 is almost
completely worthless in real world environments. Vendors will be
smart to pick up restrict and a few of the goodies in C99 and just stop
there.
Want to take a bet...?


Sure. Vendors are waiting to see what the C++ people do, because they are well
aware of the irreconcilable conflicts that have arisen. Bjarne and crew are
going to be forced to take the new stuff from C99 in the bits and pieces that don't
cause any conflict or aren't otherwise stupid for other reasons. The vendors
are going to look at this and decide that the subset of C99 that the C++ people
chose will be the least problematic solution and just go with that.
If instead, the preprocessor were a lot more functional, then you
could simply extract packed offsets from a list of declarations and
literally plug them in as offsets into a char[] and do the slow memcpy
operations yourself.


This would violate the division between preprocessor and compiler too
much (the preprocessor would have to understand quite a lot of C semantics).


No, that's not what I am proposing. I am saying that you should not use
structs at all, but you can use the contents of them as a list of comma
separated entries. With a more beefed-up preprocessor one could find the
offset of a packed char array that corresponds to the nth element of the list
as a sum of sizeof()'s and you'd be off to the races. So I am not proposing
that the preprocessor know anything more about the C language at all. I am
instead proposing that it be better at what it *does* know about -- numbers,
macros, and various C-language compatible tokens.
* a clear statement concerning the minimal level of active function
call invocations that an implementation needs to support.
Currently, recursive programs will stackfault at a certain point,
and this situation is not handled satisfactorily in the standard
(it is not addressed at all, that is), as far as I can tell.
That doesn't seem possible. The amount of "stack" that an
implementation might use for a given function is clearly not easy to
define. Better to just leave this loose.


It's not easy to define, that's for sure. But to call into recollection
a post from six weeks ago: [...] ...This is legal C (as per the Standard),
but it overflows the stack on any implementation (which is usually a
symptom of UB). Why is there no statement in the standard that even so much
as hints at this?


isgraph(-1) is also legal C -- *SYNTACTICALLY*. There is no end of problems
with the C programming environment. To gripe about runtime stack depth
limitations alone I think is kind of pointless. C is a language suitable for
and highly encouraging of writing extremely unsound and poor code. Fixing it
would require a major overhaul of the language and library.
* a library function that allows the retrieval of the size of a memory
block previously allocated using "malloc"/"calloc"/"realloc" and
friends.


There's a lot more that you can do as well. Such as a tryexpand()
function which works like realloc except that it performs no action
except returning with some sort of error status if the block cannot be
resized without moving its base pointer. Further, one would like to
be able to manage *multiple* heaps, and have a freeall() function --
it would make the problem of memory leaks much more manageable for
many applications. It would also make some cases enormously faster.


But this is perhaps territory that the Standard should steer clear of,
more like something a well-written and dedicated third-party library
could provide.


But a third party library can't do this portably. It's actually useful
functionality that you just can't get from the C language, and there's no way
to reliably map such functionality to the C language itself. One is forced to
know the details of the underlying platform to implement such things. It's
something that really *should* be in the language.
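
To make the shape of the request concrete -- tryexpand() is the poster's
hypothetical name, and since standard C offers no way to query the
allocator, a portable fallback can do nothing better than refuse:

#include <stddef.h>

/* Hypothetical interface: try to resize a malloc'd block in place.
   Returns 0 on success, nonzero if the block cannot be resized without
   moving it. Portable C has no allocator hooks, so this sketch can
   only ever report failure. */
int tryexpand(void *block, size_t newsize)
{
    (void)block;
    (void)newsize;
    return -1;
}
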
* a #define'd constant in stdio.h that gives the maximal number of
characters that a "%p" format specifier can emit. Likewise, for
other format specifiers such as "%d" and the like.

* a printf format specifier for printing numbers in base-2.
Ah -- the kludge request.


I'd rather see this as filling in a gaping hole.
Rather than adding format specifiers one at
a time, why not instead add in a way of being able to plug in
programmer-defined format specifiers?


Because that's difficult to get right (unlike a proposed binary output
form).


There are sources for snprintf available that can do it. You are asking for
this feature because you think it would be useful *FOR YOU*. I convert hex to
binary in my head without even thinking and would rather use the screen space
for more pertinent things, so it would not be useful for me. My proposal
allows the programmer to decide what is or is not useful to them.
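
For reference, a base-2 conversion is easy to write portably today; the
disagreement is only about whether the printf family should offer it for
symmetry with %o, %u and %x. A minimal sketch:

#include <stdio.h>
#include <limits.h>

/* Print the bits of an unsigned int, most significant bit first. */
void print_binary(unsigned int v)
{
    int i;
    for (i = (int)(sizeof v * CHAR_BIT) - 1; i >= 0; i--)
        putchar(((v >> i) & 1u) ? '1' : '0');
    putchar('\n');
}
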
> I think people in general would
like to use printf for printing out more than just the base types in a
collection of just a few formats defined at the whims of some 70s UNIX
hackers. Why not be able to print out your data structures, or
relevant parts of them as you see fit?


The %x format specifier mechanism is perhaps not a good way to do this,
if only because it would only allow something like 15 extra output formats.


I'm not sure what you are saying here. You all of a sudden don't like the hex
printing format? And why is having more, user definable print formats a bad
thing?
* I think I would like to see a real string-type as a first-class
citizen in C, implemented as a native type. But this would open
up too big a can of worms, I am afraid, and a good case can be
made that this violates the principles of C too much (being a
low-level language and all).


The problem is that real string handling requires memory handling.
The other primitive types in C are flat structures that are fixed
width. You either need something like C++'s constructor/destructor
semantics or automatic garbage collection otherwise you're going to
have some trouble with memory leaking.


A very simple reference-counting implementation would suffice. [...]


This would complexify the compiler to no end. It's also hard to account for a
reference that was arrived at via something like "memcpy".
* Normative statements on the upper-bound worst-case asymptotic
behavior of things like qsort() and bsearch() would be nice.


Yeah, it would be nice to catch up to where the C++ people have gone
some years ago.


I don't think it is a silly idea to have some consideration for
worst-case performance in the standard, especially for algorithmic
functions (of which qsort and bsearch are the most prominent examples).


Perhaps you misunderstand me. The fact that the C committee *DIDN'T* do this is an
abomination. STL includes some kind of sorting mechanisms which are now
guaranteed to be O(n*log(n)) because of the existence of an algorithm called
"INTROSORT" (which is really just a quicksort that aborts when it realizes it's
going too slow, and switches to heapsort -- but the authors think it's clever
because they do this determination recursively.)
* a "reverse comma" type expression, for example denoted by
a reverse apostrophe, where the leftmost value is the value
of the entire expression, but the right-hand side is also
guaranteed to be executed.


This seems too esoteric.


Why is it any more esoteric than having a comma operator?


I didn't say it was. I've never used the comma operator outside of an occasional
extra command at the end of the increment statement in a for loop in my life.
I consider comma to be esoteric as well.
* triple-&& and triple-|| operators: &&& and ||| with semantics
like the 'and' and 'or' operators in python:

a &&& b ---> if (a) then b else a
a ||| b ---> if (a) then a else b

(I think this is brilliant, and actually useful sometimes).


Hmmm ... why not instead have ordinary operator overloading?


I'll provide three reasons.

1) because it is something completely different


Yeah, it's a superset that has been embraced by the C++ community.
2) because it is quite unrelated (I don't get the 'instead')
I'm saying that you could have &&&, |||, but just don't defined what they
actually do. Require that the programmer define what they do. C doesn't have
type-specific functions, and if one were to add in operator overloading in a
consistent way, then that would mean that an operator overload would have to
accept only its defined type. For this to be useful without losing the
operators that already exist in C, the right answer is to *ADD* operators. In
fact I would suggest that one simply defined a grammar for such operators, and
allow *ALL* such operators to be definable.
3) because operator overloading is mostly a bad idea, IMHO
Well, Bjarne Stroustrup has made a recent impassioned request to *REMOVE*
features from C++. I highly doubt that operator overloading is one that has
been made or would be taken seriously. I.e., I don't think a credible
population of people who have been exposed to it would consider it a bad idea.
While
this is sometimes a useful shorthand, I am sure that different
applications have different list cutesy compactions that would be
worth while instead of the one above.


... I'd like to see them. &&& is a bit silly (it's fully equivalent to
"a ? b : 0") but ||| (or ?: in gcc) is actually quite useful.


But there are no end of little cheesy operators that one could add. For
example, a <> b to swap a and b, a <<< b to rotate a by b bits, @ a to find the
highest bit of a, etc., etc., etc. All of these are good, in some cases. And
I think that there would be no end to the number of useful operators that one
might like to add to a program. I think your proposal is DOA because you
cannot make a credible case as to why your operator in particular has any value
over any number of other operators that you might like to add.

Adding operator overloading, however, would be a real extension and would in a
sense address *all* these issues.
* a way to "bitwise invert" a variable without actually
assigning, complementing "&=", "|=", and friends.


Is a = ~a really that much of a burden to type?


It's more a strain on the brain to me, why there are coupled
assignment/operators for nigh all binary operators, but not for this
unary one.


Ok, but then again this is just a particular thing with you.
* 'min' and 'max' operators (following gcc: ?< and ?>)


As I mentioned above, you might as well have operator overloading instead.


Now I would ask you: which existing operator would you like to overload
for, say, integers, to mean "min" and "max" ?


How about a <==> b for max and a >==< b for min? I personally don't care that
much.
* a div and matching mod operator that round to -infinity,
to complement the current less useful semantics of rounding
towards zero.
Well ... but this is the very least of the kinds of arithmetic operator
extensions that one would want. A widening multiply operation is
almost *imperative*. It always floors me that other languages are not
picking this up. Nearly every modern microprocessor in existence has
a widening multiply operation -- because the CPU manufacturers *KNOW*
it's necessary. And yet it's not accessible from any language.


...It already is available in C, given a good-enough compiler. Look at
the code gcc spits out when you do:

unsigned long a = rand();
unsigned long b = rand();

unsigned long long c = (unsigned long long)a * b;


Yes I'm sure the same trick works for chars and shorts. So how do you widen a
long long multiply?!?!? What compiler trick are you going to hope for to
capture this? What you show here is just some trivial *SMALL* multiply, that
relies on the whims of the optimizer.

PowerPC, Alpha, Itanium, UltraSPARC and AMD64 all have widening multiplies that
take two 64 bit operands and return a 128 bit result in a pair of 64 bit
operands. They all invest a *LOT* of transistors to do this *ONE* operation.
They all *KNOW* you can't finagle any C/C++ compiler to produce the operation,
yet they still do it -- it's *THAT* important (hint: SSL, and therefore *ALL* of
e-commerce, uses it.)
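
For illustration, this is what one ends up writing in portable C when
uint64_t is the widest available type -- a 64x64 -> 128 multiply pieced
together from 32-bit halves; the function name is made up for this sketch:

#include <stdint.h>

/* 64x64 -> 128 bit multiply built from four 32x32 -> 64 partial
   products; a single hardware widening multiply does all of this
   in one instruction. */
void mul64x64_128(uint64_t a, uint64_t b, uint64_t *hi, uint64_t *lo)
{
    uint64_t a_lo = (uint32_t)a, a_hi = a >> 32;
    uint64_t b_lo = (uint32_t)b, b_hi = b >> 32;

    uint64_t p0 = a_lo * b_lo;
    uint64_t p1 = a_lo * b_hi;
    uint64_t p2 = a_hi * b_lo;
    uint64_t p3 = a_hi * b_hi;

    /* low 32 bits of 'cross' are result bits 32..63; the upper bits
       are the carry into the high word */
    uint64_t cross = (p0 >> 32) + (uint32_t)p1 + (uint32_t)p2;

    *lo = (cross << 32) | (uint32_t)p0;
    *hi = p3 + (p1 >> 32) + (p2 >> 32) + (cross >> 32);
}
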
Probably because most languages have been written on top of C or C++.
And what about a simple carry capturing addition?


Many languages exist where this is possible; they are called
"assembly". There is no way that you could come up with a well-defined
semantics for this.


carry +< var = a + b;
Did you know that a PowerPC processor doesn't have a shift-right where
you can capture the carry bit in one instruction? Silly but no less true.


What has this got to do with anything? Capturing carries coming out of shifts
doesn't show up in any significant algorithms that I am aware of that are
significantly faster than using what we have already. The specific operations
I am citing make a *HUGE* difference and have billion dollar price tags
associated with them.

I understand the need for the C language standard to be applicable to as many
platforms as possible. But unlike some right shift detail that you are talking
about, the widening multiply hardware actually *IS* deployed everywhere.

--
Paul Hsieh
http://www.pobox.com/~qed/
http://bstring.sf.net/
Nov 13 '05 #98
Paul Hsieh wrote:
In article <bq**********@news.tudelft.nl>, si****@jigsaw.nl says...
Paul Hsieh wrote:
Sidney Cadot <si****@jigsaw.nl> wrote:

[...] I for one would be happy if more compilers would
fully start to support C99. It will be a good day when I can actually
start to use many of the new features without having to worry about
portability too much, as is the current situation.
I don't think that day will ever come. In its totality C99 is almost
completely worthless in real world environments. Vendors will be
smart to pick up restrict and a few of the goodies in C99 and just stop
there.


Want to take a bet...?

Sure. Vendors are waiting to see what the C++ people do, because they are well
aware of the irreconcilable conflicts that have arisen. Bjarne and crew are
going to be forced to take the new stuff from C99 in the bits and pieces that don't
cause any conflict or aren't otherwise stupid for other reasons. The vendors
are going to look at this and decide that the subset of C99 that the C++ people
chose will be the least problematic solution and just go with that.


Ok. I'll give you 10:1 odds; there will be a (near-perfect) C99 compiler
by the end of this decade.
If instead, the preprocessor were a lot more functional, then you
could simply extract packed offsets from a list of declarations and
literally plug them in as offsets into a char[] and do the slow memcpy
operations yourself.


This would violate the division between preprocessor and compiler too
much (the preprocessor would have to understand quite a lot of C semantics).

No, that's not what I am proposing. I am saying that you should not use
structs at all, but you can use the contents of them as a list of comma
separated entries. With a more beefed-up preprocessor one could find the
offset of a packed char array that corresponds to the nth element of the list
as a sum of sizeof()'s and you'd be off to the races.


Perhaps I'm missing something here, but wouldn't it be easier to use the
offsetof() macro?
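
Indeed, offsetof() already gives member offsets, padding included, as
constant expressions; a small example:

#include <stddef.h>
#include <stdio.h>

struct record {
    char   tag;
    int    value;
    double weight;
};

int main(void)
{
    printf("value at offset %lu, weight at offset %lu\n",
           (unsigned long)offsetof(struct record, value),
           (unsigned long)offsetof(struct record, weight));
    return 0;
}
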
So I am not proposing
that the preprocessor know anything more about the C language at all. I am
instead proposing that it be better at what it *does* know about -- numbers,
macros, and various C-language compatible tokens.
Ok. It may make sense to extend the preprocessor.
It's not easy to define, that's for sure. But to call into recollection
a post from six weeks ago: [...] ...This is legal C (as per the Standard),
but it overflows the stack on any implementation (which is usually a
symptom of UB). Why is there no statement in the standard that even so much
as hints at this?

isgraph(-1) is also legal C -- *SYNTACTICALLY*. There is no end of problems
with the C programming environment. To gripe about runtime stack depth
limitations alone I think is kind of pointless.
Well, I showed a perfectly legal C program that should happily run and
terminate if I am to believe the standard, yet it doesn't do that on any
architecture. Excuse me for being a bit unhappy about that.
C is a language suitable for
and highly encouraging of writing extremely unsound and poor code. Fixing it
would require a major overhaul of the language and library.
That's true. I don't quite see how this relates to the preceding
statement though.
There's a lot more that you can do as well. Such as a tryexpand()
function which works like realloc except that it performs no action
except returning with some sort of error status if the block cannot be
resized without moving its base pointer. Further, one would like to
be able to manage *multiple* heaps, and have a freeall() function --
it would make the problem of memory leaks much more manageable for
many applications. It would also make some cases enormously faster.


But this is perhaps territory that the Standard should steer clear of,
more like something a well-written and dedicated third-party library
could provide.

But a third party library can't do this portably.
I don't see why not?
It's actually useful
functionality that you just can't get from the C language, and there's no way
to reliably map such functionality to the C language itself. One is forced to
know the details of the underlying platform to implement such things. It's
something that really *should* be in the language.
Well, it looks to me you're proposing to have a feature-rich heap
manager. I honestly don't see why this couldn't be implemented portably
without platform-specific knowledge. Could you elaborate?
* a printf format specifier for printing numbers in base-2.

Ah -- the kludge request.


I'd rather see this as filling in a gaping hole.
Rather than adding format specifiers one at
a time, why not instead add in a way of being able to plug in
programmer-defined format specifiers?


Because that's difficult to get right (unlike a proposed binary output
form).

There are sources for snprintf available that can do it. You are asking for
this feature because you think it would be useful *FOR YOU*.
Yes, and for a bunch of other people.
I convert hex to binary in my head without even thinking and would rather
use the screen space for more pertinent things, so it would not be useful for me.

I can do it blindfolded, with my hands tied behind my back, at an
ambient noise level that makes most people lose bladder control. I
wouldn't use it all the time mind you, but it's just plain silly to be
able to do hex, octal, and decimal, but not binary. I want this more for
reasons of orthogonality in design than anything else.
My proposal allows the programmer to decide what is or is not useful to them.
I'm all for that.
> I think people in general would
like to use printf for printing out more than just the base types in a
collection of just a few formats defined at the whims of some 70s UNIX
hackers. Why not be able to print out your data structures, or
relevant parts of them as you see fit?
I don't think it's too bad an idea (although I have never gotten round
to trying the mechanism gcc provides for this). In any case, this kind
of thing is so much more naturally done in an OOP-supporting language
like C++. Without being belligerent: why not use that if you want this
kind of thing?
The %x format specifier mechanism is perhaps not a good way to do this,
if only because it would only allow something like 15 extra output formats.

I'm not sure what you are saying here. You all of a sudden don't like the hex
printing format? And why is having more, user definable print formats a bad
thing?
I used "%x" as an example of a format specifier that isn't defined ('x'
being a placeholder for any letter that hasn't been taken by the
standard). The statement is that there'd be only about 15 letters left
for this kind of thing (including 'x' by the way -- it's not a hex
specifier). Sorry for the confusion, I should've been clearer.
* I think I would like to see a real string-type as a first-class
citizen in C, implemented as a native type. But this would open
up too big a can of worms, I am afraid, and a good case can be
made that this violates the principles of C too much (being a
low-level language and all).

The problem is that real string handling requires memory handling.
The other primitive types in C are flat structures that are fixed
width. You either need something like C++'s constructor/destructor
semantics or automatic garbage collection otherwise you're going to
have some trouble with memory leaking.


A very simple reference-counting implementation would suffice. [...]


This would complexify the compiler to no end. It's also hard to account for a
reference that was arrived at via something like "memcpy".


A first-class citizen string wouldn't be a pointer; neither would you
necessarily be able to get its address (although you should be able to
get the address of the characters it contains).
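
A minimal sketch of the reference-counting idea in today's C, with made-up
names -- and exactly the kind of library-level bookkeeping that a memcpy'd
copy of the handle would silently bypass, which is the objection above:

#include <stdlib.h>
#include <string.h>

struct rcstr {
    size_t refs;   /* number of live references */
    size_t len;
    char  *data;
};

struct rcstr *rcstr_new(const char *s)
{
    struct rcstr *r = malloc(sizeof *r);
    if (!r) return NULL;
    r->refs = 1;
    r->len  = strlen(s);
    r->data = malloc(r->len + 1);
    if (!r->data) { free(r); return NULL; }
    memcpy(r->data, s, r->len + 1);
    return r;
}

struct rcstr *rcstr_retain(struct rcstr *r)  { r->refs++; return r; }

void rcstr_release(struct rcstr *r)
{
    if (r && --r->refs == 0) { free(r->data); free(r); }
}
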
* Normative statements on the upper-bound worst-case asymptotic
behavior of things like qsort() and bsearch() would be nice.

Yeah, it would be nice to catch up to where the C++ people have gone
some years ago.


I don't think it is a silly idea to have some consideration for
worst-case performance in the standard, especially for algorithmic
functions (of which qsort and bsearch are the most prominent examples).


Perhaps you misunderstand me. The fact that the C committee *DIDN'T* do this is an
abomination. STL includes some kind of sorting mechanisms which are now
guaranteed to be O(n*log(n)) because of the existence of an algorithm called
"INTROSORT" (which is really just a quicksort that aborts when it realizes it's
going too slow, and switches to heapsort -- but the authors think it's clever
because they do this determination recursively.)


Sorry about that, I thought you were sarcastic. Ok, then we agree on
this. Moving on...
* a "reverse comma" type expression, for example denoted by
a reverse apostrophe, where the leftmost value is the value
of the entire expression, but the right-hand side is also
guaranteed to be executed.

This seems too esoteric.


Why is it any more esoteric than having a comma operator?


I didn't say it was. I've never used the comma operator outside of an occasional
extra command at the end of the increment statement in a for loop in my life.
I consider comma to be esoteric as well.


Ok, that's a valid opinion that I don't happen to share.
* triple-&& and triple-|| operators: &&& and ||| with semantics
like the 'and' and 'or' operators in python:

a &&& b ---> if (a) then b else a
a ||| b ---> if (a) then a else b

(I think this is brilliant, and actually useful sometimes).

Hmmm ... why not instead have ordinary operator overloading?


I'll provide three reasons.

1) because it is something completely different

Yeah, it's a superset that has been embraced by the C++ community.
It's a superset only if the C language would have a ||| or &&& operator
in the first place. Which (much to my dismay) it doesn't.
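
For what it's worth, gcc's statement expressions -- a non-standard
extension -- can already express both forms without evaluating the left
operand twice; the macro names here are made up:

/* a ||| b : if (a) then a else b      a &&& b : if (a) then b else a */
#define OR_ELSE(a, b)  ({ __typeof__(a) _t = (a); _t ? _t : (b); })
#define AND_THEN(a, b) ({ __typeof__(a) _t = (a); _t ? (b) : _t; })
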
2) because it is quite unrelated (I don't get the 'instead')

I'm saying that you could have &&&, |||, but just don't define what they
actually do. Require that the programmer define what they do. C doesn't have
type-specific functions, and if one were to add in operator overloading in a
consistent way, then that would mean that an operator overload would have to
accept only its defined type.
Ok, so the language should have a big bunch of operators, ready for the
taking. Incidentally, Mathematica supports this, if you want it badly.
For this to be useful without losing the
operators that already exist in C, the right answer is to *ADD* operators. In
fact I would suggest that one simply define a grammar for such operators, and
allow *ALL* such operators to be definable.
This seems to me a bad idea for a multitude of reasons. First, it would
complicate most stages of the compiler considerably. Second, a
maintenance nightmare ensues: while the standard operators of C are
basically burnt into my soul, I'd have to get used to the Fantasy
Operator Of The Month every time I take on a new project, originally
programmed by someone else.

There's a good reason that we use things like '+' and '*' pervasively,
in many situations; they are short, and easily absorbed in many
contexts. Self-defined operator tokens (consisting, of course, of
'atomic' operators like '+', '=', '<' ...) will lead to unreadable code,
I think; perhaps something akin to a complicated 'sed' script.
3) because operator overloading is mostly a bad idea, IMHO

Well, Bjarne Stroustrup has made a recent impassioned request to *REMOVE*
features from C++.
Do you have a reference? That's bound to be a fun read, and he probably
missed a few candidates.
I highly doubt that operator overloading is one that has
been made or would be taken seriously. I.e., I don't think a credible
population of people who have been exposed to it would consider it a bad idea.
I can only speak for myself; I have been exposed, and think it's a bad
idea. When used very sparsely, it has its uses. However, introducing
new user-definable operators as you propose would be folly; the only way
operator overloading works in practice is if you maintain some sort of
link to the intuitive meaning of an operator. User defined operators
lack this by definition.
While
this is sometimes a useful shorthand, I am sure that different
applications have different list cutesy compactions that would be
worth while instead of the one above.


... I'd like to see them. &&& is a bit silly (it's fully equivalent to
"a ? b : 0") but ||| (or ?: in gcc) is actually quite useful.

But there are no end of little cheesy operators that one could add. For
example, a <> b to swap a and b, a <<< b to rotate a by b bits, @ a to find the
highest bit of a, etc., etc., etc.
"<>" would be a bad choice, since it is easy to confuse for "not equal
to". I've programmed a bit in IDL for a while, which has my dear "min"
and "max" operators.... It's a pity they are denoted "<" and ">",
leading to heaps of misery by confusion.

<<< and @ are nice though. I would be almost in favour of adding them,
were it not for the fact that this would drive C dangerously close in
the direction of APL.
All of these are good, in some cases. And
I think that there would be no end to the number of useful operators that one
might like to add to a program. I think your proposal is DOA because you
cannot make a credible case as to why your operator in particular has any value
over any number of other operators that you might like to add.
Adding operator overloading, however, would be a real extension and would in a
sense address *all* these issues.
Again I wonder, seriously: wouldn't you be better off using C++?
It's more a strain on the brain to me, why there are coupled
assignment/operators for nigh all binary operators, but not for this
unary one.

Ok, but then again this is just a particular thing with you.
Guilty as charged.
* 'min' and 'max' operators (following gcc: ?< and ?>)

As I mentioned above, you might as well have operator overloading instead.
Sure, but you're talking about something that goes a lot further than
run-of-the-mill operator overloading. I think the simple way would be
to just introduce these min and max operators and be done with it.

"min" and "max" are perhaps less important than "+" and "*", but they
are probably the most-used operations that are not available right now
as operators. If we are going to extend C with new operators, they would
be the most natural choice I think.
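
What C programmers write today, for reference; part of the appeal of a
real operator is that these macros evaluate one argument twice, so
MAX(x++, y) misbehaves:

#define MIN(a, b) ((a) < (b) ? (a) : (b))
#define MAX(a, b) ((a) > (b) ? (a) : (b))
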
Now I would ask you: which existing operator would you like to overload
for, say, integers, to mean "min" and "max"?

How about a <==> b for max and a >==< b for min? I personally don't care that
much.
Those are not existing operators, as you know. They would have to be
defined in your curious "operator definition" scheme.

I find the idea freaky, yet interesting. I think C is not the place for
this (really, it would be too easy to compete in the IOCCC) but perhaps
in another language... Just to follow your argument for a bit, what
would an "operator definition" declaration look like for, say, the "?<"
min operator in your hypothetical extended C?
Yes I'm sure the same trick works for chars and shorts. So how do you widen a
long long multiply?!?!? What compiler trick are you going to hope for to
capture this? What you show here is just some trivial *SMALL* multiply, that
relies on the whims of the optimizer.
Well, I'd show you, but it's impossible _in principle_. Given that you
are multiplying two expressions of the widest type supported by your
compiler, where would it store the result?
PowerPC, Alpha, Itanium, UltraSPARC and AMD64 all have widening multiplies that
take two 64 bit operands and return a 128 bit result in a pair of 64 bit
operands. They all invest a *LOT* of transistors to do this *ONE* operation.
They all *KNOW* you can't finagle any C/C++ compiler to produce the operation,
yet they still do it -- it's *THAT* important (hint: SSL, and therefore *ALL* of
e-commerce, uses it.)
Well, I don't know if these dozen-or-so big-number 'powermod' operations
that are needed to establish an SSL connection are such a big deal as
you make it.
Probably because most languages have been written on top of C or C++.
And what about a simple carry capturing addition?


Many languages exist where this is possible; they are called
"assembly". There is no way that you could come up with a well-defined
semantics for this.

carry +< var = a + b;
It looks cute, I'll give you that. Could you please provide semantics?
It may be a lot less self-evident than you think.
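
For the unsigned case at least there is an obvious portable spelling
today -- the carry out is just the wrap-around test; a minimal sketch:

#include <stdint.h>

/* Unsigned addition is modular, so the sum wrapped (carry = 1)
   exactly when the result is smaller than either operand. */
uint64_t add_with_carry(uint64_t a, uint64_t b, unsigned *carry)
{
    uint64_t sum = a + b;
    *carry = (sum < a);
    return sum;
}
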
Did you know that a PowerPC processor doesn't have a shift-right where
you can capture the carry bit in one instruction? Silly but no less true.

What has this got to do with anything? Capturing carries coming out of shifts
don't show up in any significant algorithms that I am aware of
that are significantly faster than using what we have already.
Ah, I see you've never implemented a non-table-driven CRC or a binary
greatest common divisor algorithm. They are both hard at work when you
establish an SSL connection.
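
For instance, the binary GCD -- Stein's algorithm -- spends its inner
loops testing exactly the bit that the following right shift throws away;
a sketch:

#include <stdint.h>

uint64_t binary_gcd(uint64_t u, uint64_t v)
{
    unsigned shift = 0;

    if (u == 0) return v;
    if (v == 0) return u;

    while (((u | v) & 1u) == 0) {   /* strip common factors of two */
        u >>= 1;
        v >>= 1;
        shift++;
    }
    while ((u & 1u) == 0)           /* make u odd */
        u >>= 1;
    do {
        while ((v & 1u) == 0)       /* make v odd */
            v >>= 1;
        if (u > v) {                /* keep u <= v */
            uint64_t t = u; u = v; v = t;
        }
        v -= u;                     /* odd - odd = even */
    } while (v != 0);

    return u << shift;
}
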
The specific operations I am citing make a *HUGE* difference and have billion
dollar price tags associated with them.
These numbers you made up from thin air, no? Otherwise, I'd welcome a
reference.
I understand the need for the C language standard to be applicable to as many
platforms as possible. But unlike some right shift detail that you are talking
about, the widening multiply hardware actually *IS* deployed everywhere.


Sure is. Several good big-number libraries are available that have
processor-dependent machine code to do just this.

Best regards,

Sidney

Nov 13 '05 #99
"Lorenzo Villari" <vl****@tiscali .it> wrote:
struct
{
char *expr;
int value;
} lexer ['z' - 'a'] = { ['b' - 'a'] = { "[a-zA-Z]", 0 }, ... };


This is unportable. The only characters that can be reliably
subtracted in C are the digits. '0' through '9' are guaranteed
to be consecutive. Letters need not be consecutive, and need
not even be in alphabetical order.

--
Simon.
Nov 13 '05 #100
