Bytes IT Community

is "typedef int int;" illegal????

Hi

Suppose you have somewhere

#define BOOL int

and somewhere else

typedef BOOL int;

This gives

typedef int int;

To me, this looks like a null assignment:

a = a;

Would it break something if lcc-win32 accepted that,
maybe with a warning?

Is the compiler *required* to reject that?

Microsoft MSVC: rejects it.
lcc-win32 now rejects it.
gcc (with no flags) accepts it with some warnings.

Thanks

jacob
---
A free compiler system for windows:
http://www.cs.virginia.edu/~lcc-win32

Mar 24 '06 #1
134 Replies


jacob navia wrote:
Hi

Suppose you have somewhere

#define BOOL int

and somewhere else

typedef BOOL int;

This gives

typedef int int;
The typedef name must be an identifier, here you have a keyword.
To me, this looks like a null assignment:

a = a;
Maybe from an implementor's point of view but the Standard does not
allow it.
Would it break something if lcc-win32 would accept that,
maybe with a warning?
A diagnostic would be required, after that you can do anything you
want, including continuing to compile the program as you normally
would.
Is the compiler *required* to reject that?
Define reject. You are required to produce a diagnostic, that's it.
Microsoft MSVC: rejects it.
Good.
lcc-win32 now rejects it.
Good.
gcc (with no flags) accepts it with some warnings.


It fails to compile for me (gcc 4.0.2):
error: two or more data types in declaration specifiers

Robert Gamble

Mar 24 '06 #2

Robert Gamble a écrit :
jacob navia wrote:
Hi

Suppose you have somewhere

#define BOOL int

and somewhere else

typedef BOOL int;

This gives

typedef int int;

The typedef name must be an identifier, here you have a keyword.

To me, this looks like a null assignment:

a = a;

Maybe from an implementor's point of view but the Standard does not
allow it.

Would it break something if lcc-win32 would accept that,
maybe with a warning?

A diagnostic would be required, after that you can do anything you
want, including continuing to compile the program as you normally
would.


OK, that is what I had in mind.
Is the compiler *required* to reject that?

Define reject. You are required to produce a diagnostic, that's it.


Reject means the program fails to compile. A warning is not a reject
since the program compiles.
Microsoft MSVC: rejects it.

Good.

lcc-win32 now rejects it.

Good.

gcc (with no flags) accepts it with some warnings.

It fails to compile for me (gcc 4.0.2):
error: two or more data types in declaration specifiers


Strange, I get the following:

[root@gateway root]# gcc -v
Reading specs from /usr/lib/gcc-lib/i586-mandrake-linux-gnu/2.96/specs
gcc version 2.96 20000731 (Mandrake Linux 8.2 2.96-0.76mdk)
[root@gateway root]# cat tint.c
typedef int int;
[root@gateway root]# gcc -c tint.c
tint.c:1: warning: useless keyword or type name in empty declaration
tint.c:1: warning: empty declaration
[root@gateway root]# ls -l tint.o
-rw-r--r-- 1 root root 703 Mar 24 16:17 tint.o
[root@gateway root]#

Program is not rejected.
Robert Gamble

Mar 24 '06 #3

jacob navia wrote:
Robert Gamble a écrit :
jacob navia wrote:
Hi

Suppose you have somewhere

#define BOOL int

and somewhere else

typedef BOOL int;

This gives

typedef int int;

The typedef name must be an identifier, here you have a keyword.

To me, this looks like a null assignment:

a = a;

Maybe from an implementor's point of view but the Standard does not
allow it.

Would it break something if lcc-win32 would accept that,
maybe with a warning?

A diagnostic would be required, after that you can do anything you
want, including continuing to compile the program as you normally
would.


OK, that is what I had in mind.
Is the compiler *required* to reject that?

Define reject. You are required to produce a diagnostic, that's it.


Reject means the program fails to compile. A warning is not a reject
since the program compiles.


With that definition no, you are not required to reject such a program
but you are certainly free to do so.
Microsoft MSVC: rejects it.

Good.

lcc-win32 now rejects it.

Good.

gcc (with no flags) accepts it with some warnings.

It fails to compile for me (gcc 4.0.2):
error: two or more data types in declaration specifiers


Strange, I get the following:

[root@gateway root]# gcc -v
Reading specs from /usr/lib/gcc-lib/i586-mandrake-linux-gnu/2.96/specs
gcc version 2.96 20000731 (Mandrake Linux 8.2 2.96-0.76mdk)
[root@gateway root]# cat tint.c
typedef int int;
[root@gateway root]# gcc -c tint.c
tint.c:1: warning: useless keyword or type name in empty declaration
tint.c:1: warning: empty declaration
[root@gateway root]# ls -l tint.o
-rw-r--r-- 1 root root 703 Mar 24 16:17 tint.o
[root@gateway root]#

Program is not rejected.


Quite a bit has changed since gcc 2.96 including the strictness of the
syntax and type checking. Note though that a diagnostic was still
produced.

Robert Gamble

Mar 24 '06 #4

jacob navia wrote:
Robert Gamble a écrit :
jacob navia wrote:
Hi

Suppose you have somewhere

#define BOOL int

and somewhere else

typedef BOOL int;

This gives

typedef int int;

The typedef name must be an identifier, here you have a keyword.

To me, this looks like a null assignment:

a = a;

Maybe from an implementor's point of view but the Standard does not
allow it.

Would it break something if lcc-win32 would accept that,
maybe with a warning?

A diagnostic would be required, after that you can do anything you
want, including continuing to compile the program as you normally
would.


OK, that is what I had in mind.
Is the compiler *required* to reject that?

Define reject. You are required to produce a diagnostic, that's it.


Reject means the program fails to compile. A warning is not a reject
since the program compiles.


The standard doesn't ever require a compiler to reject
code; it's quite legal for a C compiler to accept Fortran
code, so long as it prints out a diagnostic (maybe
"This looks like Fortran, not C... compiling it anyway...").

There is a common notion among compilers that a "warning" is
a non-fatal diagnostic and an "error" is a fatal diagnostic
(i.e., one which causes no object code to be produced), but
that is not standardised.
Microsoft MSVC: rejects it.

Good.

lcc-win32 now rejects it.

Good.

gcc (with no flags) accepts it with some warnings.

It fails to compile for me (gcc 4.0.2):
error: two or more data types in declaration specifiers


Strange, I get the following:

[root@gateway root]# gcc -v
Reading specs from /usr/lib/gcc-lib/i586-mandrake-linux-gnu/2.96/specs
gcc version 2.96 20000731 (Mandrake Linux 8.2 2.96-0.76mdk)
[root@gateway root]# cat tint.c
typedef int int;
[root@gateway root]# gcc -c tint.c
tint.c:1: warning: useless keyword or type name in empty declaration
tint.c:1: warning: empty declaration
[root@gateway root]# ls -l tint.o
-rw-r--r-- 1 root root 703 Mar 24 16:17 tint.o
[root@gateway root]#

Program is not rejected.


That's a nearly 6-year old compiler, and not an official GCC
release at that. Not to say that 2.96 didn't have its uses,
and maybe it still does, but it's far from the state of the
art.

-- James
Mar 24 '06 #5

jacob navia wrote:
Suppose you have somewhere
#define BOOL int
and somewhere else
typedef BOOL int;

This gives
typedef int int;

To me, this looks like a null assignment:
a = a;


Robert Gamble wrote:
Maybe from an implementor's point of view but the Standard does not
allow it.
The typedef name must be an identifier, here you have a keyword.


Exactly.

On a related note, I've suggested in the past that duplicate
(redundant) typedefs be allowed as long as they are semantically
equivalent, e.g.:

typedef long mytype; // A
typedef long mytype; // B, error in C99
typedef long int mytype; // C, error in C99

It would introduce no problems if the redundant typedefs at B and C
were allowed. C99 rules, however, disallow this, so we're forced to
do things like the following in all our header files:

// foo.h
#ifndef MYTYPE_DEF
typedef long mytype;
#define MYTYPE_DEF
#endif

// bar.h
#ifndef MYTYPE_DEF
typedef long int mytype;
#define MYTYPE_DEF
#endif
Allowing redundant typedefs parallels the rule allowing redundant
preprocessor macro definitions:

#define SIZE 100 // D
#define SIZE 100 // E, okay, duplicate allowed

This also parallels C++ semantics, which allow duplicate typedefs.

-drt

Mar 24 '06 #6

James Dennett a écrit :
Strange, I get the following:

[root@gateway root]# gcc -v
Reading specs from /usr/lib/gcc-lib/i586-mandrake-linux-gnu/2.96/specs
gcc version 2.96 20000731 (Mandrake Linux 8.2 2.96-0.76mdk)
[root@gateway root]# cat tint.c
typedef int int;
[root@gateway root]# gcc -c tint.c
tint.c:1: warning: useless keyword or type name in empty declaration
tint.c:1: warning: empty declaration
[root@gateway root]# ls -l tint.o
-rw-r--r-- 1 root root 703 Mar 24 16:17 tint.o
[root@gateway root]#

Program is not rejected.

That's a nearly 6-year old compiler, and not an official GCC
release at that. Not to say that 2.96 didn't have its uses,
and maybe it still does, but it's far from the state of the
art.

-- James


Wow, this complicates things quite a bit.
If Microsoft AND gcc reject the code... I think it's better to leave it as it is.
I thought that gcc let it pass with some warnings and intended to do the
same, but it is true that I have not upgraded gcc in quite a while.

thanks
Mar 24 '06 #7



David R Tribble wrote On 03/24/06 11:01,:

On a related note, I've suggested in the past that duplicate
(redundant) typedefs be allowed as long as they are semantically
equivalent, e.g.:

typedef long mytype; // A
typedef long mytype; // B, error in C99
typedef long int mytype; // C, error in C99


It seems to me "semantically equivalent" might
open an unpleasant can of worms. For example, are

typedef unsigned int mytype;
typedef size_t mytype;

"semantically equivalent" on an implementation that
uses `typedef unsigned int size_t;'? What's really
wanted is "equivalence of intent," which seems a
harder notion to pin down.

If the suggestion were modified to require "lexical
equivalence," such questions would disappear and I don't
think the language would be any the worse without them.
Writing header files would perhaps not be quite as much
easier as with "semantic equivalence," but I think would
be a good deal easier than it is now.

--
Er*********@sun.com

Mar 24 '06 #8

jacob navia wrote:
#define BOOL int
typedef BOOL int;

Would it break something if lcc-win32 would accept that,
maybe with a warning?
Please don't try to change the language, it doesn't do your
users any favor. Indeed, it merely encourages even more
out-of-control coding.
Is the compiler *required* to reject that?


A diagnostic is required.
Mar 24 '06 #9

"Eric Sosman" <Er*********@sun.com> wrote in message
news:e0*********@news1brm.Central.Sun.COM...
David R Tribble wrote On 03/24/06 11:01,:

On a related note, I've suggested in the past that duplicate
(redundant) typedefs be allowed as long as they are semantically
equivalent, e.g.:

typedef long mytype; // A
typedef long mytype; // B, error in C99
typedef long int mytype; // C, error in C99


It seems to me "semantically equivalent" might
open an unpleasant can of worms. For example, are

typedef unsigned int mytype;
typedef size_t mytype;

"semantically equivalent" on an implementation that
uses `typedef unsigned int size_t;'? What's really
wanted is "equivalence of intent," which seems a
harder notion to pin down.


Couldn't it simply use the same rules as declarations do -- i.e. require
compatible types?

extern unsigned int myvar;
extern size_t myvar;


Mar 24 '06 #10

Douglas A. Gwyn a écrit :
jacob navia wrote:
#define BOOL int
typedef BOOL int;


Would it break something if lcc-win32 would accept that,
maybe with a warning?

Please don't try to change the language, it doesn't do your
users any favor. Indeed, it merely encourages even more
out-of-control coding.

Is the compiler *required* to reject that?

A diagnostic is required.


Well, "changing the language" is quite an overstatement. Gcc 2.xx accepted
that (with warnings). Did they "change the language"???

But basically why

typedef int int;

should be forbidden? Like

a = a;

it does nothing and the language is not changed; at most it is made more
consistent.

But since gcc has changed its behavior in later versions, as I learned
in this forum, and microsoft rejects it, I think I will not do this.

Thanks for your reply
Mar 24 '06 #11

jacob navia wrote :
But basically why

typedef int int;

should be forbidden?


Defining a type that already exists makes no sense.
Mar 24 '06 #12

loufoque wrote:
jacob navia wrote :
But basically why

typedef int int;

should be forbidden?

Defining a type that already exists makes no sense.


But *declaring* something that's already been declared is generally OK
in C. Why does a typedef have to behave like a definition rather than a
declaration -- it doesn't reserve any storage, just binds a type to an
identifier.

(The idea of allowing a keyword for a typedef name is a different story;
personally, I don't like it.)
Mar 24 '06 #13



Wojtek Lerch wrote On 03/24/06 12:25,:
"Eric Sosman" <Er*********@sun.com> wrote in message
news:e0*********@news1brm.Central.Sun.COM...
David R Tribble wrote On 03/24/06 11:01,:
On a related note, I've suggested in the past that duplicate
(redundant) typedefs be allowed as long as they are semantically
equivalent, e.g.:

typedef long mytype; // A
typedef long mytype; // B, error in C99
typedef long int mytype; // C, error in C99


It seems to me "semantically equivalent" might
open an unpleasant can of worms. For example, are

typedef unsigned int mytype;
typedef size_t mytype;

"semantically equivalent" on an implementation that
uses `typedef unsigned int size_t;'? What's really
wanted is "equivalence of intent," which seems a
harder notion to pin down.

Couldn't it simply use the same rules as declarations do -- i.e. require
compatible types?

extern unsigned int myvar;
extern size_t myvar;


Yes it could, and since `typedef' doesn't actually
define types (it just defines aliases) the issue could
be resolved this way. But is that the way we *want* it
resolved, from a portability perspective? It would open
the door (or fail to close the door) to bletcherous
abuses like `typedef time_t ptrdiff_t', things that would
work on some platforms but go horribly wrong on others.

We've got time_t and size_t and int16_t and all the
rest specifically so the programmer has a chance to stay
above the implementation-specific fray. It would seem a
step in the wrong direction to make typedef weaker than
it already is.

(Isn't there a "what if" rule somewhere? ;-)

--
Er*********@sun.com

Mar 24 '06 #14

On Fri, 24 Mar 2006 18:43:32 +0100, in comp.lang.c , jacob navia
<ja***@jacob.remcomp.fr> wrote:
Well, "changing the language" is quite an overstatement. Gcc 2.xx accepted
that (with warnings).
A warning is a diagnostic.
Did they "change the language" ???
They emitted a diagnostic.
But basically why

typedef int int;
It's meaningless.
should be forbidden? Like

a = a;


it has a meaning, albeit a pointless one.

You might as well say
"why is it incorrect to say 'runned bluer which apple' but ok to say
'prunes are orange' ?"

Mark McIntyre
--
"Debugging is twice as hard as writing the code in the first place.
Therefore, if you write the code as cleverly as possible, you are,
by definition, not smart enough to debug it."
--Brian Kernighan
Mar 24 '06 #15

"Eric Sosman" <Er*********@sun.com> wrote in message
news:e0**********@news1brm.Central.Sun.COM...
Wojtek Lerch wrote On 03/24/06 12:25,:
"Eric Sosman" <Er*********@sun.com> wrote in message
news:e0*********@news1brm.Central.Sun.COM...
....
typedef unsigned int mytype;
typedef size_t mytype;
....
Couldn't it simply use the same rules as declarations do -- i.e. require
compatible types?

extern unsigned int myvar;
extern size_t myvar;
Yes it could, and since `typedef' doesn't actually
define types (it just defines aliases) the issue could
be resolved this way. But is that the way we *want* it
resolved, from a portability perspective? It would open
the door (or fail to close the door) to bletcherous
abuses like `typedef time_t ptrdiff_t', things that would
work on some platforms but go horribly wrong on others.


They wouldn't go horribly wrong, just cause a compile error, like they now
do on all platforms. They would behave the same way as this:

time_t fun();
ptrdiff_t fun();

or this:

time_t var;
ptrdiff_t *ptr = &var;

I'd expect these to be more likely to appear in a program by mistake than
your "typedef time_t ptrdiff_t" -- are you more worried about abuses that
are so obviously bletcherous that no sane person would put them in their
code?
We've got time_t and size_t and int16_t and all the
rest specifically so the programmer has a chance to stay
above the implementation-specific fray. It would seem a
step in the wrong direction to make typedef weaker than
it already is.


In the existing C (and, presumably, in the proposed "C with redundant
typedefs"), trying to redefine a *standard* typedef in a program is
undefined behaviour anyway (7.1.3p2). And so is trying to declare a
standard function after including the corresponding header. Nevertheless, C
allows programs to declare the same function twice; do you think that should
be forbidden, too?

Mar 24 '06 #16

Wojtek Lerch a écrit :
In the existing C (and, presumably, in the proposed "C with redundant
typedefs"), trying to redefine a *standard* typedef in a program is
undefined behaviour anyway (7.1.3p2). And so is trying to declare a
standard function after including the corresponding header. Nevertheless, C
allows programs to declare the same function twice; do you think that should
be forbidden, too?


VERY GOOD POINT!
Mar 24 '06 #17

James Dennett wrote:
....
The standard doesn't ever require a compiler to reject
code;


Not quite: C99 section 4p4 says:

"The implementation shall not successfully translate a preprocessing
translation unit containing a #error preprocessing directive unless it
is part of a group skipped by conditional inclusion."

However, that is the only exception to your statement.

Mar 24 '06 #18

On 2006-03-24, jacob navia <ja***@jacob.remcomp.fr> wrote:
Robert Gamble a écrit :
jacob navia wrote:
Hi

Suppose you have somewhere

#define BOOL int

and somewhere else

typedef BOOL int;

This gives

typedef int int;

The typedef name must be an identifier, here you have a keyword.

To me, this looks like a null assignment:

a = a;

Maybe from an implementor's point of view but the Standard does not
allow it.

Would it break something if lcc-win32 would accept that,
maybe with a warning?

A diagnostic would be required, after that you can do anything you
want, including continuing to compile the program as you normally
would.


OK, that is what I had in mind.
Is the compiler *required* to reject that?

Define reject. You are required to produce a diagnostic, that's it.


Reject means the program fails to compile. A warning is not a reject
since the program compiles.


There is absolutely nothing that the standard requires to fail to
compile, with the SOLE exception of a program containing the #error
directive.
Mar 24 '06 #19

On 2006-03-24, jacob navia <ja***@jacob.remcomp.fr> wrote:
Douglas A. Gwyn a écrit :
jacob navia wrote:
#define BOOL int
typedef BOOL int;


Would it break something if lcc-win32 would accept that,
maybe with a warning?

Please don't try to change the language, it doesn't do your
users any favor. Indeed, it merely encourages even more
out-of-control coding

Is the compiler *required* to reject that?

A diagnostic is required.


Well, "changing the language" is quite an overstatement. Gcc 2.xx accepted
that (with warnings). Did they "change the language"???


GCC also didn't do what you think it did with it. It interpreted it as
"define (nothing) to be 'int int'", _NOT_ "define int to be int".

And a warning _does_ satisfy the requirement for a diagnostic.
Mar 24 '06 #20

David R Tribble wrote:
On a related note, I've suggested in the past that duplicate
(redundant) typedefs be allowed as long as they are semantically
equivalent, e.g.:

typedef long mytype; // A
typedef long mytype; // B, error in C99
typedef long int mytype; // C, error in C99


Eric Sosman wrote:
It seems to me "semantically equivalent" might
open an unpleasant can of worms. For example, are
typedef unsigned int mytype;
typedef size_t mytype;
"semantically equivalent" on an implementation that
uses `typedef unsigned int size_t;'? What's really
wanted is "equivalence of intent," which seems a
harder notion to pin down.
It should mean "semantically equivalent", as in "equivalent types",
to allow C to be compatible with the C++.

If the suggestion were modified to require "lexical
equivalence," such questions would disappear and I don't
think the language would be any the worse without them.
Writing header files would perhaps not be quite as much
easier as with "semantic equivalence," but I think would
be a good deal easier than it is now.


Lexical equivalence is harder for compilers to check than
semantic type equivalence, which is already present in compilers.

-drt

Mar 25 '06 #21

"jacob navia" <ja***@jacob.remcomp.fr> wrote in message
news:44***********************@news.wanadoo.fr...
But basically why

typedef int int;

should be forbidden?


BTW Think about

typedef long long long long;

;-)

Mar 25 '06 #22

loufoque wrote:
Defining a type that already exists makes no sense.


There are numerous issues involved that led to the current
spec for typedef. One of them is that after the first
typedef of a given identifier, that identifier plays a
different role (type synonym) and it would be logical for
it to do so in the second "redundant" typedef (which
happens to result in a syntactic error).
typedef int foo;
typedef foo bar;
typedef bar foo; // would this be allowed?
typedef bar int; // but not this?
Basically this is too fundamental and established in the
language to be messing with. If you design some *new*
language you might want to do it differently.
Mar 25 '06 #23

"Wojtek Lerch" <Wo******@yahoo.ca> wrote in message
news:48************@individual.net...
"jacob navia" <ja***@jacob.remcomp.fr> wrote in message
news:44***********************@news.wanadoo.fr...
But basically why

typedef int int;

should be forbidden?


BTW Think about

typedef long long long long;

;-)


That "long long" even exists is a travesty.

What are we going to do when 128-bit ints become common in another couple
decades? Call them "long long long"? Or if we redefine "long long" to be
128-bit ints and "long" to be 64-bit ints, will a 32-bit int be a "short
long" or a "long short"? Maybe 32-bit ints will become "short" and 16-bit
ints will be a "long char" or "short short"? Or is a "short short" already
equal to a "char"?

All we need are "int float" and "double int" and the entire C type system
will be perfect! </sarcasm>

S

--
Stephen Sprunk "Stupid people surround themselves with smart
CCIE #3723 people. Smart people surround themselves with
K5SSS smart people who disagree with them." --Aaron Sorkin

Mar 25 '06 #24

On 2006-03-25, Stephen Sprunk <st*****@sprunk.org> wrote:
"Wojtek Lerch" <Wo******@yahoo.ca> wrote in message
news:48************@individual.net...
"jacob navia" <ja***@jacob.remcomp.fr> wrote in message
news:44***********************@news.wanadoo.fr...
But basically why

typedef int int;

should be forbidden?


BTW Think about

typedef long long long long;

;-)


That "long long" even exists is a travesty.

What are we going to do when 128-bit ints become common in another couple
decades? Call them "long long long"? Or if we redefine "long long" to be
128-bit ints and "long" to be 64-bit ints, will a 32-bit int be a "short
long" or a "long short"? Maybe 32-bit ints will become "short" and 16-bit
ints will be a "long char" or "short short"? Or is a "short short" already
equal to a "char"?

All we need are "int float" and "double int" and the entire C type system
will be perfect! </sarcasm>


Don't forget short double and long float.

and are _Complex integers legal?
Mar 25 '06 #25

"Stephen Sprunk" <st*****@sprunk.org> writes:
[...]
That "long long" even exists is a travesty.

What are we going to do when 128-bit ints become common in another
couple decades? Call them "long long long"? Or if we redefine "long
long" to be 128-bit ints and "long" to be 64-bit ints, will a 32-bit
int be a "short long" or a "long short"? Maybe 32-bit ints will
become "short" and 16-bit ints will be a "long char" or "short short"?
Or is a "short short" already equal to a "char"?

All we need are "int float" and "double int" and the entire C type
system will be perfect! </sarcasm>


So how would you improve it?

--
Keith Thompson (The_Other_Keith) ks***@mib.org <http://www.ghoti.net/~kst>
San Diego Supercomputer Center <*> <http://users.sdsc.edu/~kst>
We must do something. This is something. Therefore, we must do this.
Mar 25 '06 #26

Keith Thompson opined:
"Stephen Sprunk" <st*****@sprunk.org> writes:

All we need are "int float" and "double int" and the entire C type
system will be perfect! </sarcasm>


So how would you improve it?


Well, obviously by adding `int float` and `double int`. ;-)

--
BR, Vladimir

Everyone can be taught to sculpt: Michelangelo would have had
to be taught how not to. So it is with the great programmers.

Mar 25 '06 #27

James Dennett wrote:
.... snip ...
The standard doesn't ever require a compiler to reject code;
Except, I suppose, if an #error directive is encountered.
it's quite legal for a C compiler to accept Fortran
code, so long as it prints out a diagnostic (maybe
"This looks like Fortran, not C... compiling it anyway...").


In which case it would no longer be a C compiler and would not come
under the restrictions of the C standard.

Mar 25 '06 #28

Keith Thompson wrote:
"Stephen Sprunk" <st*****@sprunk.org> writes:
[...]
That "long long" even exists is a travesty.

What are we going to do when 128-bit ints become common in another
couple decades? Call them "long long long"? Or if we redefine "long
long" to be 128-bit ints and "long" to be 64-bit ints, will a 32-bit
int be a "short long" or a "long short"? Maybe 32-bit ints will
become "short" and 16-bit ints will be a "long char" or "short short"?
Or is a "short short" already equal to a "char"?

All we need are "int float" and "double int" and the entire C type
system will be perfect! </sarcasm>


So how would you improve it?


Perhaps by adding Long or llong or Llong for 128-bit integers? Ugly, but
there's nothing else that can be done. Calling a 128-bit integer "long
long long" would be ridiculous.

Mar 25 '06 #29

On 2006-03-25, santosh <sa*********@gmail.com> wrote:
James Dennett wrote:
... snip ...
The standard doesn't ever require a compiler to reject code;


Except, I suppose, if an #error directive is encountered.
it's quite legal for a C compiler to accept Fortran
code, so long as it prints out a diagnostic (maybe
"This looks like Fortran, not C... compiling it anyway...").


In which case it would no longer be a C compiler and would not come
under the restrictions of the C standard.


If it also compiles C, it has to print a diagnostic on being given non-C
code and being told that it's C. (e.g. gcc -x c)
Mar 25 '06 #30

jacob navia wrote:
James Dennett a écrit :
Strange, I get the following:

[root@gateway root]# gcc -v
Reading specs from /usr/lib/gcc-lib/i586-mandrake-linux-gnu/2.96/specs
gcc version 2.96 20000731 (Mandrake Linux 8.2 2.96-0.76mdk)
[root@gateway root]# cat tint.c
typedef int int;
[root@gateway root]# gcc -c tint.c
tint.c:1: warning: useless keyword or type name in empty declaration
tint.c:1: warning: empty declaration
[root@gateway root]# ls -l tint.o
-rw-r--r-- 1 root root 703 Mar 24 16:17 tint.o
[root@gateway root]#

Program is not rejected.

That's a nearly 6-year old compiler, and not an official GCC
release at that. Not to say that 2.96 didn't have its uses,
and maybe it still does, but it's far from the state of the
art.

-- James


Wow, this complicates things quite a bit.
If Microsoft AND gcc reject the code... I think it's better to leave it
as it is. I thought that gcc let it pass with some warnings and intended
to do the same, but it is true that I have not upgraded gcc in quite
a while.


Perhaps you could try it with a couple more modern compilers - just as a
'belt and braces' kind of thing? Aren't the Intel compilers at least free to
try
http://www.intel.com/cd/software/pro...ers/219690.htm,
and then there's Sun's .. http://developers.sun.com/prodtech/cc/index.jsp ..
which *are* free, and might prove useful?
--
==============
*Not a pedant*
==============
Mar 25 '06 #31

Stephen Sprunk wrote:
....
That "long long" even exists is a travesty.

What are we going to do when 128-bit ints become common in another couple
decades? Call them "long long long"? Or if we redefine "long long" to be
128-bit ints and "long" to be 64-bit ints, will a 32-bit int be a "short
long" or a "long short"? Maybe 32-bit ints will become "short" and 16-bit
ints will be a "long char" or "short short"? Or is a "short short" already
equal to a "char"?
We now have size-named types. That should reduce (but not,
unfortunately, eliminate) the likelihood of similar travesties in the
future.

All we need are "int float" and "double int" and the entire C type system
will be perfect! </sarcasm>


Hey! That's a half-way plausible syntax for declaring a fixed-point
type. ;-) All it needs is some way of specifying the number of digits
after the decimal point.

Mar 25 '06 #32

Jordan Abel wrote:
....
and are _Complex integers legal?


They aren't (6.7.2p2), but conceptually it would be a meaningful
concept, and I suspect there are certain obscure situations where
they'd be useful.

Mar 25 '06 #33

"santosh" <sa*********@gmail.com> writes:
Keith Thompson wrote:
"Stephen Sprunk" <st*****@sprunk.org> writes:
[...]
> That "long long" even exists is a travesty.


So how would you improve it?


Perhaps by adding Long or llong or Llong for 128 bit integers? Ugly but
there's nothing that can be done. [...]


Not a good "solution" in my opinion. I'm sure there are lots of
programs that use each of these identifiers. "long long" doesn't
reserve any previously unreserved identifiers.
--
int main(void){char p[]="ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuv wxyz.\
\n",*q="kl BIcNBFr.NKEzjwCIxNJC";int i=sizeof p/2;char *strchr();int putchar(\
);while(*q){i+=strchr(p,*q++)-p;if(i>=(int)sizeof p)i-=sizeof p-1;putchar(p[i]\
);}return 0;}
Mar 25 '06 #34

ku****@wizard.net writes:
Jordan Abel wrote:
...
and are _Complex integers legal?


They aren't (6.7.2p2), but conceptually it would be a meaningful
concept, and I suspect there are certain obscure situations where
they'd be useful.


Mathematically, they're called "Gaussian integers".

--
Keith Thompson (The_Other_Keith) ks***@mib.org <http://www.ghoti.net/~kst>
San Diego Supercomputer Center <*> <http://users.sdsc.edu/~kst>
We must do something. This is something. Therefore, we must do this.
Mar 25 '06 #35

santosh a écrit :
Keith Thompson wrote:
"Stephen Sprunk" <st*****@sprunk.org> writes:
[...]
That "long long" even exists is a travesty.

What are we going to do when 128-bit ints become common in another
couple decades? Call them "long long long"? Or if we redefine "long
long" to be 128-bit ints and "long" to be 64-bit ints, will a 32-bit
int be a "short long" or a "long short"? Maybe 32-bit ints will
become "short" and 16-bit ints will be a "long char" or "short short"?
Or is a "short short" already equal to a "char"?

All we need are "int float" and "double int" and the entire C type
system will be perfect! </sarcasm>


So how would you improve it?

Perhaps by adding Long or llong or Llong for 128 bit integers? Ugly but
there's nothing that can be done. Calling a 128 bit integer as long
long long would be ridiculous.

lcc-win32 supports 128 bit integers. The type is named:

int128

Planned is support for 128 bit constants with

i128 m = 85566677766545455544455543344i128;

and

printf("%i128d",m);
Mar 25 '06 #36

Ben Pfaff wrote:
"santosh" <sa*********@gmail.com> writes:

Keith Thompson wrote:
"Stephen Sprunk" <st*****@sprunk.org> writes:
[...]

That "long long" even exists is a travesty.

So how would you improve it?


Perhaps by adding Long or llong or Llong for 128 bit integers? Ugly but
there's nothing that can be done. [...]

Not a good "solution" in my opinion. I'm sure there are lots of
programs that use each of these identifiers. "long long" doesn't
reserve any previously unreserved identifiers.


atoll()?

--
Eric Sosman
es*****@acm-dot-org.invalid
Mar 25 '06 #37

On Fri, 24 Mar 2006 23:22:41 -0600, "Stephen Sprunk"
<st*****@sprunk.org> wrote in comp.lang.c:
"Wojtek Lerch" <Wo******@yahoo.ca> wrote in message
news:48************@individual.net...
"jacob navia" <ja***@jacob.remcomp.fr> wrote in message
news:44***********************@news.wanadoo.fr...
But basically why

typedef int int;

should be forbidden?


BTW Think about

typedef long long long long;

;-)


That "long long" even exists is a travesty.

What are we going to do when 128-bit ints become common in another couple
decades? Call them "long long long"? Or if we redefine "long long" to be
128-bit ints and "long" to be 64-bit ints, will a 32-bit int be a "short
long" or a "long short"? Maybe 32-bit ints will become "short" and 16-bit
ints will be a "long char" or "short short"? Or is a "short short" already
equal to a "char"?

All we need are "int float" and "double int" and the entire C type system
will be perfect! </sarcasm>


The 256 bit integer type has already been designated "long long long
long spam and long".

'nuff said.

--
Jack Klein
Home: http://JK-Technology.Com
FAQs for
comp.lang.c http://c-faq.com/
comp.lang.c++ http://www.parashift.com/c++-faq-lite/
alt.comp.lang.learn.c-c++
http://www.contrib.andrew.cmu.edu/~a...FAQ-acllc.html
Mar 25 '06 #38

ku****@wizard.net schrieb:
Stephen Sprunk wrote:
...
That "long long" even exists is a travesty.

What are we going to do when 128-bit ints become common in another couple
decades? Call them "long long long"? Or if we redefine "long long" to be
128-bit ints and "long" to be 64-bit ints, will a 32-bit int be a "short
long" or a "long short"? Maybe 32-bit ints will become "short" and 16-bit
ints will be a "long char" or "short short"? Or is a "short short" already
equal to a "char"?


We now have size-named types. That should reduce (but not,
unfortunately, eliminate) the likelihood of similar travesties in the
future.


Heh. We can hope.
All we need are "int float" and "double int" and the entire C type system
will be perfect! </sarcasm>


Hey! That's a half-way plausible syntax for declaring a fixed-point
type. ;-) All it needs is some way of specifying the number of digits
after the decimal point.


There is something called Embedded C for that,
http://www.embedded-c.org

Cheers
Michael
--
E-Mail: Mine is an /at/ gmx /dot/ de address.
Mar 25 '06 #39

On 2006-03-25, jacob navia <ja***@jacob.remcomp.fr> wrote:
santosh a écrit :
Keith Thompson wrote:
"Stephen Sprunk" <st*****@sprunk.org> writes:
[...]

That "long long" even exists is a travesty.

What are we going to do when 128-bit ints become common in another
couple decades? Call them "long long long"? Or if we redefine "long
long" to be 128-bit ints and "long" to be 64-bit ints, will a 32-bit
int be a "short long" or a "long short"? Maybe 32-bit ints will
become "short" and 16-bit ints will be a "long char" or "short short"?
Or is a "short short" already equal to a "char"?

All we need are "int float" and "double int" and the entire C type
system will be perfect! </sarcasm>

So how would you improve it?

Perhaps by adding Long or llong or Llong for 128 bit integers? Ugly but
there's nothing that can be done. Calling a 128 bit integer as long
long long would be ridiculous.

lcc-win32 supports 128 bit integers. The type is named:

int128

Planned is support for 128 bit constants with

i128 m = 85566677766545455544455543344i128;


Case-insensitive, I hope.
and

printf("%i128d",m);


How do you differentiate this from the valid standard format string
consisting of %i followed by the string "128d"? Maybe you should use
%I128d instead, like how microsoft does I64
Mar 25 '06 #40

On 25 Mar 2006 00:28:33 -0800, in comp.lang.c , "santosh"
<sa*********@gmail.com> wrote:
Keith Thompson wrote:

So how would you improve it?


Perhaps by adding Long or llong or Llong for 128 bit integers? Ugly


And delightful to pronounce for our Welsh colleagues.

Personally, I suspect they'll have to redefine the language
drastically.
Mark McIntyre
--
"Debugging is twice as hard as writing the code in the first place.
Therefore, if you write the code as cleverly as possible, you are,
by definition, not smart enough to debug it."
--Brian Kernighan
Mar 25 '06 #41


Wojtek Lerch wrote:
"Eric Sosman" <Er*********@sun.com> wrote in message
news:e0*********@news1brm.Central.Sun.COM...
David R Tribble wrote On 03/24/06 11:01,:

On a related note, I've suggested in the past that duplicate
(redundant) typedefs be allowed as long as they are semantically
equivalent, e.g.:

typedef long mytype; // A
typedef long mytype; // B, error in C99
typedef long int mytype; // C, error in C99


It seems to me "semantically equivalent" might
open an unpleasant can of worms. For example, are

typedef unsigned int mytype;
typedef size_t mytype;

"semantically equivalent" on an implementation that
uses `typedef unsigned int size_t;'? What's really
wanted is "equivalence of intent," which seems a
harder notion to pin down.


Couldn't it simply use the same rules as declarations do -- i.e. require
compatible types?

extern unsigned int myvar;
extern size_t myvar;


If a type is given two typedefs with compatible
but different types, which one do you use? If
we see

typedef int F(int g(), int h(int));

F *f1;

typedef int F(int g(void), int h());

F *f2;

what are the types of f1 and f2? Which type is used
can affect what values may be assigned, etc.

Mar 25 '06 #42

<en******@yahoo.com> wrote in message
news:11*********************@i40g2000cwc.googlegroups.com...
If a type is given two typedefs with compatible
but different types, which one do you use? If
we see

typedef int F(int g(), int h(int));

F *f1;

typedef int F(int g(void), int h());

F *f2;

what are the types of f1 and f2? Which type is used
can affect what values may be assigned, etc.


The composite type, just like for any other declaration.
Mar 25 '06 #43


Wojtek Lerch wrote:
<en******@yahoo.com> wrote in message
news:11*********************@i40g2000cwc.googlegroups.com...
If a type is given two typedefs with compatible
but different types, which one do you use? If
we see

typedef int F(int g(), int h(int));

F *f1;

typedef int F(int g(void), int h());

F *f2;

what are the types of f1 and f2? Which type is used
can affect what values may be assigned, etc.


The composite type, just like for any other declaration.


The problem is the composite type isn't known when f1 is
declared. Any uses between f1's declaration and the second
definition of F would either have to use the first type or
require a second pass. Using the composite type for both
isn't how other declarations work either.

Mar 25 '06 #44

Jordan Abel a écrit :
On 2006-03-25, jacob navia <ja***@jacob.remcomp.fr> wrote:
lcc-win32 supports 128 bit integers. The type is named:

int128

Planned is support for 128 bit constants with [snip]
printf("%i128d",m);

How do you differentiate this from the valid standard format string
consisting of %i followed by the string "128d"? Maybe you should use
%I128d instead, like how microsoft does I64


good point!!!!

Thanks for this remark.

jacob

Mar 25 '06 #45

<en******@yahoo.com> wrote in message
news:11**********************@j33g2000cwa.googlegroups.com...
Wojtek Lerch wrote:
<en******@yahoo.com> wrote in message
news:11*********************@i40g2000cwc.googlegro ups.com...
> If a type is given two typedefs with compatible
> but different types, which one do you use?
[...]

The composite type, just like for any other declaration.


The problem is the composite type isn't known when f1 is
declared. Any uses between f1's declaration and the second
definition of F would either have to use the first type or
require a second pass. Using the composite type for both
isn't how other declarations work either.


Well, how do they work? If a regular identifier is declared twice, its type
between the declarations may be different from the type after the second
declaration:

int arr1[], arr2[];
// arr1 and arr2 have the same, incomplete type here
int arr1[6], arr2[8];
// arr1 and arr2 have different, complete types here

Why couldn't the same rule apply to typedefs?
Mar 26 '06 #46

jacob navia <ja***@jacob.remcomp.fr> writes:
[...]
lcc-win32 supports 128 bit integers. The type is named:

int128


Which infringes on the user namespace. Is it defined in a
system-specific header?

--
Keith Thompson (The_Other_Keith) ks***@mib.org <http://www.ghoti.net/~kst>
San Diego Supercomputer Center <*> <http://users.sdsc.edu/~kst>
We must do something. This is something. Therefore, we must do this.
Mar 26 '06 #47

On 2006-03-26, Keith Thompson <ks***@mib.org> wrote:
jacob navia <ja***@jacob.remcomp.fr> writes:
[...]
lcc-win32 supports 128 bit integers. The type is named:

int128


Which infringes on the user namespace. Is it defined in a
system-specific header?


probably should be something like __int128, typedef'd to int128_t in
stdint.h
Mar 26 '06 #48


On Sat, 25 Mar 2006, santosh wrote:
Keith Thompson wrote:
"Stephen Sprunk" <st*****@sprunk.org> writes:
[...]
That "long long" even exists is a travesty.

What are we going to do when 128-bit ints become common in another
couple decades? [...] All we need are "int float" and "double int" and the entire C type
system will be perfect! </sarcasm>


So how would you improve it?


Perhaps by adding Long or llong or Llong for 128 bit integers? Ugly
but there's nothing that can be done. Calling a 128 bit integer as
long long long would be ridiculous.


Obviously, 128 bits should be "long longer", and 256 bits should be
"long longest". Then, of course, 512 bits would be "longer longest" and
1024 bits would be "longest longest." That would cover us for another
few decades, at least. :)

-Arthur,
who sees problems with this proposal, unfortunately
Mar 26 '06 #49

jacob navia wrote:
Like

a = a;

it does nothing


That code does do something, if "a" is volatile.

Mar 27 '06 #50

134 Replies
