Bytes | Software Development & Data Engineering Community

is "typedef int int;" illegal????

Hi

Suppose you have somewhere

#define BOOL int

and somewhere else

typedef BOOL int;

This gives

typedef int int;

To me, this looks like a null assignment:

a = a;

Would it break something if lcc-win32 would accept that,
maybe with a warning?

Is the compiler *required* to reject that?

Microsoft MSVC: rejects it.
lcc-win32 now rejects it.
gcc (with no flags) accepts it with some warnings.

Thanks

jacob
---
A free compiler system for windows:
http://www.cs.virginia.edu/~lcc-win32

Mar 24 '06
On 2006-03-29, jacob navia <ja***@jacob.remcomp.fr> wrote:
Jordan Abel wrote:
On 2006-03-29, jacob navia <ja***@jacob.remcomp.fr> wrote:
Douglas A. Gwyn wrote:

jacob navia wrote:
>lcc-win32 supports 128 bit integers. The type is named:
>int128
We hope you defined the appropriate stuff in <stdint.h>
and <inttypes.h>, since that is what portable programs
will have to use instead of implementation-specific names.

Note also that you have made lcc-win32 non standards
conformant. You should have used an identifier reserved
for use by the C implementation, not one that is
guaranteed to be available for the application.

The int128 type is only defined IF you

#include <int128.h>

Otherwise you can use the identifier int128 as you want.

jacob

In that case, why not use stdint.h and int128_t, int_least128_t, and
int_fast128_t?


Because that would force ALL users of stdint.h to accept int128_t and
all the associated machinery, which is probably not what all of them want.


Why? What machinery is associated with int128_t that c99 doesn't
_already_ say is permitted in stdint.h?

You'd have
[u]int128_t, [u]int_least128_t, [u]int_fast128_t, etc typedefs,
INT128_MIN, INT128_MAX, UINT128_MAX, and the associated LEAST and FAST
ones as well, INT128_C(x) and UINT128_C(x) in stdint.h

{PRI,SCN}[diouxX]{FAST,LEAST,}128 in inttypes.h

what else do you need?
But the name int128 is not "cast in stone" and since I suppose the names
intXXX_t are reserved I could use those.

Basically this type is implemented using lcc-win32 specific extensions
like operator overloading, which allows new types to be defined easily. These
extensions are disabled when you invoke the compiler under the "no
extensions" mode. If I put the 128 bit integers in the stdint
header, the operator overloading required would not work under the "ansi
c" environment, and problems would appear.
Why not implement it as a standard type so that it can _always_ be used,
with nothing but an #ifdef INT128_MAX to check if it's present?
That is why I use a special header that will be used only by people
who want those integer types.

Of course there is a strict ANSI C interface for 128 bit integers, but
if you use it, you would have to write

int128 a,b,c;
...
c = i128add(a,b);

instead of

c = a+b;


why? why not implement it as a standard type, with the compiler knowing
about it?

#ifdef INT_LEAST128_MAX
int_least128_t a,b,c;
c = a+b;
#else
#error No 128-bit integer type available
#endif
Mar 29 '06 #91
jacob navia wrote:
Jordan Abel wrote:
In that case, why not use stdint.h and int128_t, int_least128_t, and
int_fast128_t?

Because that would force ALL users of stdint.h to accept int128_t and
all the associated machinery, which is probably not what all of them want.


If the programs don't try to use the type then the extra definitions
are of no consequence.
... If I put the 128 bit integers in the stdint
header, the operator overloading required would not work under the "ansi
c" environment, and problems would appear. That is why I use a special
header that will be used only by people who want those integer types.


You ought to rethink your design. If your compiler knows the
type as __int128 (for example) then <stdint.h> need only refer
to that name. You may have to define a testable macro for
your extended environment in order for the standard header to
know whether that type is supported or not, but that kind of
thing is quite common in implementations already.
Mar 30 '06 #92
ku****@wizard.net wrote [re "long long"]:
You consider that an advantage. I think it's a disadvantage to have a
type whose minimum required size corresponds to 64 bits, but to give
it a name which does not make that fact explicit.
Then you should use <stdint.h>, which was introduced at the
same time. None of the "keyword" types has ever had a
specific size embedded in its name.
Also, I've heard it criticised because its form makes
it something unique in the standard: a doubled keyword that is neither
a syntax error nor equivalent to the corresponding un-doubled keyword. I
don't know much about the internals of compiler design, but I've seen
comments on someone who thought he did, who claimed that this
distinction unnecessarily imposed an (admittedly small) additional
level of complexity on the parser.


If a parser generator is used (e.g. yacc) there is no significant
problem. If a hand-coded parser is used, it's nearly trivial to
handle. (Look ahead one token, for example. In Ritchie's PDP-11
C compiler a "long" counter was incremented, and there was no
diagnostic for multiple "longs". It is trivial to test for a
count of 1, 2, or many and do the right thing for each case.)
Mar 30 '06 #93
Douglas A. Gwyn wrote:
ku****@wizard.net wrote [re "long long"]:
You consider that an advantage. I think it's a disadvantage to have a
type whose minimum required size corresponds to 64 bits, but to give
it a name which does not make that fact explicit.
Then you should use <stdint.h>, which was introduced at the
same time.


I plan to, should our client ever give us permission to use anything
more advanced than C94. However, I wasn't complaining about the absence
of those types - I know they exist. I was objecting to the presence of
"long long", and in particular to its presence in some pre-C99
implementations. It's that presence which forced the C committee to
accept "long long" in the same revision as the preferred alternatives.
... None of the "keyword" types has ever had a
specific size embedded in its name.


And, in retrospect, I don't approve of that fact.

Mar 30 '06 #94
David R Tribble wrote:
I'm still waiting for a standard macro that tells me about endianness
(but that's a topic for another thread).


Wojtek Lerch wrote:
One macro, or one per integer type? C doesn't disallow systems where
some types are big endian and some little endian.

C doesn't even disallow "mixed endian" -- any permutation of bits is OK.
Would you just classify those as "other", or do you have something more
complicated in mind? Or would you just ban them?


David R Tribble wrote:
Something along the lines of:
http://david.tribble.com/text/c9xmach.txt


Wojtek Lerch wrote:
I have to say that I find it rather vague and simplistic, and can't find
where it answers my questions. I have absolutely no clue how you wanted to
handle implementations that are neither clearly little-endian nor clearly
big-endian. You didn't propose to ban them, did you?
No, that's why there are three endianness macros. This allows for,
say, the PDP-11 mixed-endian 'long int' type:

#define _ORD_BIG 0 /* Big-endian */
#define _ORD_LITTLE 0 /* Little-endian */

#define _ORD_BITF_HL 0 /* Bitfield fill order */
#define _ORD_BYTE_HL 0 /* Byte order within shorts */
#define _ORD_WORD_HL 1 /* Word order within longs */

What about implementations with one-byte shorts?
Obviously the macro names could be better.

What if the bit order within a short doesn't match the bit order in a char?
What if the byte order within a two-byte short doesn't match the byte order
within a half of a four-byte long? What about the halves of an int? What about
implementations with three-byte longs? What if the most significant bits
sit in the middle byte? Or if the three most significant bits are mapped to
the least significant bit of the three bytes?
Then we need more macros with better names.
You're not saying that this is an unsolvable problem, are you?

Perhaps because they all made the incorrect assumption that in every
conforming implementation, every integer type must necessarily be either
little endian or big endian?
I didn't make that assumption.

Personally, I think it would be both easier and more useful not to try to
classify all types on all implementations, but instead to define names for
big- and little-endian types and make them all optional. For instance:
uint_be32_t -- a 32-bit unsigned type with no padding bits and a
big-endian representation, if such a type exists.


How do you tell if those types are not implemented?

More to the point, how do you tell portably what byte order plain
'int' is implemented with?

-drt

Mar 30 '06 #95
Douglas A. Gwyn wrote:
... None of the "keyword" types has ever had a
specific size embedded in its name.


Kuyper wrote:
And, in retrospect, I don't approve of that fact.


Then you probably don't approve of Java, Perl, awk, ksh, FORTRAN,
BASIC, etc., or most other programming languages, either.

-drt

Mar 30 '06 #96
"David R Tribble" <da***@tribble.com> wrote in message
news:11**********************@g10g2000cwb.googlegroups.com...
David R Tribble wrote:
Something along the lines of:
http://david.tribble.com/text/c9xmach.txt


Wojtek Lerch wrote:
What if the bit order within a short doesn't match the bit order in a char?
What if the byte order within a two-byte short doesn't match the byte order
within a half of a four-byte long? What about the halves of an int? What about
implementations with three-byte longs? What if the most significant bits
sit in the middle byte? Or if the three most significant bits are mapped to
the least significant bit of the three bytes?


Then we need more macros with better names.
You're not saying that this is an unsolvable problem, are you?


Pretty much, depending on what exactly you call the problem and what kind of
a solution you find acceptable.

Let's concentrate on implementations that have 16-bit short types with no
padding bits. There are 20922789888000 possible permutations of 16 bits,
and the C standard doesn't disallow any of them. Even though it's
theoretically possible to come up with a system of macros allowing programs
to distinguish all the permutations, I don't think it would be very useful
or practical. For all practical purposes, a distinction between big endian,
little endian, and "other" is sufficient. There are no existing "other"
implementations anyway.

In practice, a simple one-bit solution like yours is perfectly fine.
Unfortunately, it only covers practical implementations; therefore, it
wouldn't be acceptable as a part of the standard.
Perhaps because they all made the incorrect assumption that in every
conforming implementation, every integer type must necessarily be either
little endian or big endian?


I didn't make that assumption.


Correct me if I'm wrong, but you did seem to make the assumption that there
are only two possible byte orders within a short, and that there are only
two possible "word orders" within a long, and that knowing those two bits of
information (along with the common stuff from <limits.h>) gives you complete
or at least useful knowledge about the bit order of all integer types (in
C89).

If I indeed misunderstood something, could you explain how you would use
your macros in a program to distinguish between implementations where an
unsigned short occupies two 9-bit bytes, has two padding bits, and
represents the value 0x1234 as

(a) 0x12, 0x34 ("big endian", with a padding bit at the top of each byte)
(b) 0x24, 0x68 ("big endian", with a padding bit at the bottom of each byte)
(c) 0x22, 0x64 ("big endian", with a padding bit in the middle of each byte)
(d) 0x34, 0x12 ("little endian", padding at the top)
(e) 0x68, 0x24 ("little endian", padding at the bottom)
(f) 0x23, 0x14 ("middle endian", with the middle bits in the first byte, a
padding bit at the top of each byte)
Personally, I think it would be both easier and more useful not to try to
classify all types on all implementations , but instead to define names
for
big- and little-endian types and make them all optional. For instance:
uint_be32_t -- a 32-bit unsigned type with no padding bits and a
big-endian representation, if such a type exists.


How do you tell if those types are not implemented?


The same way as any other type from <stdint.h> -- #if
defined(UINT_BE32_MAX).
More to the point, how do you tell portably what byte order plain
'int' is implemented with?


You don't. It doesn't make sense to talk about the "byte order" without
assuming that the value bits are grouped into bytes according to their
value; and that assumption is not portable. At least not in theory.

Using your method, how do you tell where the padding bits are located? If
you can't, how useful is it to know the "byte order"?

Mar 30 '06 #97


David R Tribble wrote on 03/30/06 16:55:
[...]

Wojtek Lerch wrote:
I have to say that I find it rather vague and simplistic, and can't find
where it answers my questions. I have absolutely no clue how you wanted to
handle implementations that are neither clearly little-endian nor clearly
big-endian. You didn't propose to ban them, did you?

No, that's why there are three endianness macros. This allows for,
say, the PDP-11 mixed-endian 'long int' type:

#define _ORD_BIG 0 /* Big-endian */
#define _ORD_LITTLE 0 /* Little-endian */


Does the Standard require that the 1's bit and the
2's bit of an `int' reside in the same byte? Or is the
implementation free to scatter the bits of the "pure
binary" representation among the different bytes as it
pleases? (It must, of course, scatter the corresponding
bits of signed and unsigned versions in the same way.)

If the latter, I think there's the possibility (a
perverse possibility) of a very large number of permitted
"endiannesses," something like

(sizeof(type) * CHAR_BIT) !
-----------------------------
(CHAR_BIT !) ** sizeof(type)

Argument: There are `sizeof(type) * CHAR_BIT' bits (value,
sign, and padding) in the object, so the number of ways to
permute the bits is the factorial of that quantity. But C
cannot detect the arrangement of individual bits within a
byte, so each byte of the object divides the number of
detectably different arrangements by `CHAR_BIT!'.

For an `int' made up of four eight-bit bytes, this
gives 32! / (8! ** 4) ~= 1e17 "endiannesses," or one tenth
of a billion billion.

--
Er*********@sun.com

Mar 30 '06 #98
On 2006-03-30, Wojtek Lerch <Wo******@yahoo.ca> wrote:
(a) 0x12, 0x34 ("big endian", with a padding bit at the top of each byte)
(b) 0x24, 0x68 ("big endian", with a padding bit at the bottom of each byte)
(c) 0x22, 0x64 ("big endian", with a padding bit in the middle of each byte)
(d) 0x34, 0x12 ("little endian", padding at the top)
(e) 0x68, 0x24 ("little endian", padding at the bottom)
(f) 0x23, 0x14 ("middle endian", with the middle bits in the first byte, a
padding bit at the top of each byte)


You forgot 0x09, 0x34, big-endian with the padding bits at the top of
the word, which is, to me, the most obvious of all.
Mar 31 '06 #99
"Jordan Abel" <ra*******@gmail.com> wrote in message
news:sl***********************@random.yi.org...
You forgot 0x09, 0x34, big-endian with the padding bits at the top of
the word, which is, to me, the most obvious of all.


The truth is I didn't think of it because my example originally had 8-bit
bytes and no padding. But as far as my point is concerned, it doesn't
matter which combination is the most obvious one, only that there are
zillions of valid combinations.
Mar 31 '06 #100