
is "typedef int int;" illegal????

Hi

Suppose you have somewhere

#define BOOL int

and somewhere else

typedef BOOL int;

This gives

typedef int int;

To me, this looks like a null assignment:

a = a;

Would it break something if lcc-win32 accepted that,
maybe with a warning?

Is the compiler *required* to reject that?

Microsoft MSVC: rejects it.
lcc-win32 now rejects it.
gcc (with no flags) accepts it with some warnings.

Thanks

jacob
---
A free compiler system for windows:
http://www.cs.virginia.edu/~lcc-win32

Mar 24 '06
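[Editorial aside: note that because the preprocessor rewrites every later
mention of BOOL, even the conventionally ordered typedef expands to
"typedef int int;" while the macro is in effect. A minimal sketch of the
usual fix, which is to pick one mechanism rather than both:]

/* Use the typedef alone -- no "#define BOOL int" anywhere. */
typedef int BOOL;      /* BOOL is now an alias for int */

BOOL flag = 1;         /* works the same, with no macro in sight */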

Joe Wright wrote:
Jordan Abel wrote:
On 2006-03-31, Joe Wright <jo********@comcast.net> wrote:
tedu wrote:
Wojtek Lerch wrote:
> But of course you can detect the order, at least in cases where padding bits
> don't obscure the view. Take a look at the representation of a power of
> two. If there are no padding bits, only one of the bytes has a non-zero
> value, and that value is a power of two as well. And of course you can
> easily detect which power of two it is.
how could you do this?

assume i have a 4 bit unsigned int, to make things easy. the bits are
ordered 1423. so decimal to binary:
1 == 1000
2 == 0010
3 == 1010
...

how can you detect that 1 is bit pattern 1000?

Bits are not arbitrarily ordered the way bytes might be. Your four bits
are ordered 3210 (as powers of 2) and you couldn't change it if you
wanted to.


But they could be ordered differently when you look at it as an int than
when you look at it as chars.


No, they can't. The bits of a byte are ordered as they are. The bit
order cannot change between int and char.


Citation please?

I don't see anything in the standard that requires the value bits of
any two unrelated integer types to be in the same order. It's certainly
feasible, though very expensive, for an implementation to have 'int'
values represented using the bits within each byte in the reverse
order with which those bits would be interpreted as unsigned char. Such
an implementation would be very unnatural, but it would be perfectly
feasible, and could be done in a way that's perfectly conforming. If
you can find a clause in the standard prohibiting such an
implementation, please cite it.

It would be much more plausible at the hardware level: I think it would
be quite feasible to design a chip where instructions that work on
2-byte words interpret the values of bits in precisely the reverse
order of the way that they're interpreted by instructions that work on
one byte at a time. I can't come up with any good reason to do so, but
I suspect it could be done fairly efficiently, achieving almost the
same speeds as more rationally-designed hardware.

The point isn't that there's any good reason to do this; I can't come
up with any. The point is that the standard deliberately fails to
specify such details. I believe that the people who wrote the standard
worked on the principle that it should avoid specifying anything that
it doesn't have a pressing need to specify. That makes it possible to
implement C on a wide variety of platforms, including ones using
technologies that didn't even exist when the standard was first
written. Can you think of any reason why the standard should specify
that unrelated integer types order their bits within each byte the same
way?

Apr 1 '06 #121
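[Editorial aside: a minimal C sketch of the probe Wojtek describes at the
top of this post -- store a power of two and inspect its object
representation through unsigned char. Assuming unsigned int has no
padding bits, exactly one byte prints non-zero:]

#include <stdio.h>

int main(void)
{
    unsigned int u = 1;   /* a power of two: exactly one value bit set */
    const unsigned char *p = (const unsigned char *)&u;
    size_t i;

    for (i = 0; i < sizeof u; i++)
        printf("byte %u: %02X\n", (unsigned)i, (unsigned)p[i]);
    /* The non-zero byte is itself a power of two; which byte it is,
       and which bit within it, exposes how this implementation lays
       out the value bits. */
    return 0;
}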
Wojtek Lerch wrote:
"Joe Wright" <jo********@com cast.net> wrote in message
news:Wa******** ************@co mcast.com...
..The byte is the atomic object. The bits within the byte can't be moved
around like the bytes in a long. A byte with value one hundred will have a
binary bitset of 01100100 on all systems where byte is eight bits. And you
couldn't change it if you wanted to.


That's simply because you insist on displaying the bits in the conventional
order, with the most significant one on the left and the least significant
one on the right. By the same token, a 16-bit unsigned short with value
three hundred has to be displayed as the bit pattern 0000000100101100, and
there's no way to change that. But if you decide to order the bits
according to how they're laid out in the bytes, you might end up with
something like 00000001 00101100, or 00101100 00000001, or maybe even
11000010 00010000.


Displaying bits of a byte in conventional order is a "good thing"
because it allows you and me to know what we are talking about. My main
point is that at the byte level, we must do that. The value five is
always 00000101 at the byte level. Always.

CPU "design" will determine the byte order of objects in memory. The
"design" cannot determine the bit order of a byte simply because byte is
the finest granularity available. The CPU cannot address a 'bit'.

--
Joe Wright
"Everything should be made as simple as possible, but not simpler."
--- Albert Einstein ---
Apr 1 '06 #122
pete wrote:
Joe Wright wrote:
The bit order cannot change between int and char.


I don't think that there's any requirement
for the two lowest order bits of an int type object,
to be in the same byte,
if sizeof(int) is greater than one.

Ok, I'll play. Assume sizeof (int) is 2.

int i;
char c = 3;

Assume c looks like 00000011

i = c;

I suppose little endian i looks like 00000011 00000000
and big endian i looks like 00000000 00000011

Your turn.

--
Joe Wright
"Everything should be made as simple as possible, but not simpler."
--- Albert Einstein ---
Apr 1 '06 #123
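[Editorial aside: Joe's two suppositions are easy to check on any one
implementation with a byte dump; a minimal sketch:]

#include <stdio.h>

int main(void)
{
    int i;
    char c = 3;
    const unsigned char *p = (const unsigned char *)&i;
    size_t k;

    i = c;
    for (k = 0; k < sizeof i; k++)
        printf("%02X ", (unsigned)p[k]);
    putchar('\n');
    /* "03 00" matches the little-endian picture, "00 03" the
       big-endian one; pete's point below is that the standard
       permits still other layouts. */
    return 0;
}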
Joe Wright wrote:

pete wrote:
Joe Wright wrote:
The bit order cannot change between int and char.


I don't think that there's any requirement
for the two lowest order bits of an int type object,
to be in the same byte,
if sizeof(int) is greater than one.

Ok, I'll play. Assume sizeof (int) is 2.

int i;
char c = 3;

Assume c looks like 00000011

i = c;

I suppose little endian i looks like 00000011 00000000


If the two lowest order bits are in separate bytes, then it's:
00000001 00000001
in either endian

--
pete
Apr 1 '06 #124
pete wrote:

Joe Wright wrote:

pete wrote:
Joe Wright wrote:

> The bit order cannot change between int and char.

I don't think that there's any requirement
for the two lowest order bits of an int type object,
to be in the same byte,
if sizeof(int) is greater than one.

Ok, I'll play. Assume sizeof (int) is 2.

int i;
char c = 3;

Assume c looks like 00000011

i = c;

I suppose little endian i looks like 00000011 00000000


If the two lowest order bits are in separate bytes, then it's:
00000001 00000001
in either endian


For sizeof(int) == 2, CHAR_BIT == 8

c = 0: 00000000 00000000
c = 1: 00000001 00000000
c = 2: 00000000 00000001
c = 3: 00000001 00000001
c = 4: 00000010 00000000
c = 5: 00000011 00000000
c = 6: 00000010 00000001
c = 7: 00000011 00000001
c = 8: 00000000 00000010

--
pete
Apr 1 '06 #125
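[Editorial aside: a short sketch that reproduces pete's table under his
hypothetical mapping, where value bit k lands in byte k%2 at position
k/2 within that byte:]

#include <stdio.h>

/* Hypothetical representation: even-weight value bits go to the first
   byte, odd-weight bits to the second. */
static void split_bits(unsigned v, unsigned char out[2])
{
    unsigned k;

    out[0] = out[1] = 0;
    for (k = 0; k < 16; k++)
        if (v & (1u << k))
            out[k % 2] |= (unsigned char)(1u << (k / 2));
}

int main(void)
{
    unsigned char b[2];
    unsigned c;

    for (c = 0; c <= 8; c++) {
        split_bits(c, b);
        printf("c = %u: %02X %02X\n", c, (unsigned)b[0], (unsigned)b[1]);
    }
    return 0;
}

Run, it prints the same nine rows as pete's table, in hex rather than
binary.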
"pete" <pf*****@mindsp ring.com> wrote in message
news:44******** ***@mindspring. com...
Joe Wright wrote: ....
int i;
char c = 3;

Assume c looks like 00000011

i = c;

I suppose little endian i looks like 00000011 00000000


If the two lowest order bits are in separate bytes, then it's:
00000001 00000001


Why couldn't it be

10000000 10000000

or

00000001 10000000

?
in either endian


I don't think "endian" applies here. How do you define "endian" without
assuming that bits are grouped in bytes according to their value?
Apr 1 '06 #126
Wojtek Lerch wrote:

"pete" <pf*****@mindsp ring.com> wrote in message
news:44******** ***@mindspring. com...
Joe Wright wrote: ...
int i;
char c = 3;

Assume c looks like 00000011

i = c;

I suppose little endian i looks like 00000011 00000000


If the two lowest order bits are in separate bytes, then it's:
00000001 00000001


Why couldn't it be

10000000 10000000

or

00000001 10000000

?


Those are fine.
in either endian


I don't think "endian" applies here.
How do you define "endian" without
assuming that bits are grouped in bytes according to their value?


The bits *are* grouped in bytes according to their values.

It's just that
"the two lowest order bits, are in seperate bytes"
is an incomplete specification.

--
pete
Apr 1 '06 #127
Joe Wright wrote:
Wojtek Lerch wrote:
"Joe Wright" <jo********@com cast.net> wrote in message
news:Wa******** ************@co mcast.com...
..The byte is the atomic object. The bits within the byte can't be moved
around like the bytes in a long. A byte with value one hundred will have a
binary bitset of 01100100 on all systems where byte is eight bits. And you
couldn't change it if you wanted to.
That's simply because you insist on displaying the bits in the conventional
order, with the most significant one on the left and the least significant
one on the right. By the same token, a 16-bit unsigned short with value
three hundred has to be displayed as the bit pattern 0000000100101100, and
there's no way to change that. But if you decide to order the bits
according to how they're laid out in the bytes, you might end up with
something like 00000001 00101100, or 00101100 00000001, or maybe even
11000010 00010000.


Displaying bits of a byte in conventional order is a "good thing"
because it allows you and me to know what we are talking about. My main
point is that at the byte level, we must do that. The value five is
always 00000101 at the byte level. Always.


Displaying bits in the conventional order is often a "good thing"
because it simplifies communication by allowing you to assume that the
convention doesn't need to be explained. But that doesn't make it the
only possible order, or even the only useful order. In a discussion
about serial transmission of data, it may be more appropriate to
display the bits in the order they're transmitted; and if the protocol
being discussed transmits the least significant bit first, you'll end
up displaying a byte with the value five as 10100000. Or maybe just
1010000, if it's a seven-bit protocol. Similarly, if you were
explaining how the bits are represented by the state of transistors in
some chip, you might prefer to display them in the order they're laid
out in the chip. There are many ways to order the bits of a byte, and
there's no rule in the C standard that forbids displaying them in an
unconventional order.
CPU "design" will determine the byte order of objects in memory. The
"design" cannot determine the bit order of a byte simply because byte is
the finest granularity available. The CPU cannot address a 'bit'.


*Which* CPU cannot address a bit? My understanding is that some can.
Anyway, what does that have to do with the C standard?

The bits are just some physical circuits in silicon. Some operations
of the CPU are designed to implement some mathematical operations, in
which case the bits are designed to represent some mathematical values
-- typically, various powers of two. Depending on the operation, the
same physical bit may represent different values: for
instance, the bit that represents the value 1 in an 8-bit operation may
represent the value 0x100 in a 16-bit operation. The exact rules of
how the various operations assign values to the various pieces of
silicon are not the business of the C standard; the only thing the C
standard does require is that if you look at the contents of a region
of memory as a single value of an integer type T and then as sizeof(T)
values of type unsigned char, then there must be a mapping between
those values that can be described in terms of the bits of the binary
representations of the values. The text doesn't say how the mapping
must order the bits, only that it must exist. If you believe that
there is a requirement there that I have missed, please let me know
where to find it.

Apr 1 '06 #128
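[Editorial aside: that mapping can be recovered empirically on a given
implementation by setting each value bit in turn and seeing where it
lands. A sketch, again assuming unsigned int has no padding bits:]

#include <stdio.h>
#include <limits.h>

int main(void)
{
    unsigned k, i;

    for (k = 0; k < sizeof(unsigned) * CHAR_BIT; k++) {
        unsigned u = 1u << k;
        const unsigned char *p = (const unsigned char *)&u;

        for (i = 0; i < sizeof u; i++)
            if (p[i])
                printf("value bit %2u -> byte %u, pattern %02X\n",
                       k, i, (unsigned)p[i]);
    }
    /* The standard guarantees this mapping exists and is stable;
       it says nothing about what it looks like. */
    return 0;
}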
Joe Wright wrote:
....
Displaying bits of a byte in conventional order is a "good thing"
because it allows you and me to know what we are talking about. My main
point is that at the byte level, we must do that. The value five is
always 00000101 at the byte level. Always.
An implementation can generate code for integer arithmetic which
handles a bit pattern of 00000101 as if it represented, for example, a
value of 160. This is not at all "natural", or efficient, but it could
still conform to the C standard. The standard doesn't say what it would
need to say to make it nonconforming. That fact is precisely what
ensures that it would also be feasible (though difficult) to create a
conforming implementation of C for a platform which implements
trinary arithmetic or binary-coded-decimal (BCD) arithmetic at the
hardware level (I mention those two, out of an infinity of other
possibilities, because actual work has been done on both of those kinds
of hardware, though I'm not sure trinary computers were ever anything
but a curiosity).
CPU "design" will determine the byte order of objects in memory. The
"design" cannot determine the bit order of a byte simply because byte is
the finest granularity available. The CPU cannot address a 'bit'.


I agree about bits not being addressable (at least on most
architectures - I remember vaguely hearing about machines where they
were addressable). However, the implementation can generate code which
extracts the bits and interprets them in any fashion that the
implementation chooses, regardless of what interpretation the hardware
itself uses for those bits. Any hardware feature which made that
impossible would also render it impossible to implement C's bitwise
operators, because support for arbitrary reinterpretation of bit
patterns can be built up out of those operators.

And, as I said before, nothing prevents the hardware itself from
interpreting bit patterns in different ways depending upon which
instructions are used, or which mode of operation has been turned on. I
know I've seen hardware with the ability to interpret bytes as either
binary or BCD, depending upon which instructions were used.

Apr 1 '06 #129
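[Editorial aside: the last point is visible directly in code. Shifts and
masks are defined on values, not on physical layouts, so a value-level
bit dump prints the same digits on every conforming implementation:]

#include <stdio.h>

int main(void)
{
    unsigned v = 5;
    unsigned k;

    for (k = 0; k < 8; k++)   /* most significant of 8 value bits first */
        printf("%u", (v >> (7 - k)) & 1u);
    putchar('\n');            /* prints 00000101 on every implementation */
    return 0;
}

In that sense Joe's "00000101, always" is true at the value level; the
disagreement in this thread is about the representation level, which
these operators never touch.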

Joe Wright wrote:
pete wrote:
Joe Wright wrote:
The bit order cannot change between int and char.
I don't think that there's any requirement
for the two lowest order bits of an int type object,
to be in the same byte,
if sizeof(int) is greater than one.

Ok, I'll play. Assume sizeof (int) is 2.


What is it you're "playing"? You didn't address the point he raised.
int i;
char c = 3;

Assume c looks like 00000011

i = c;

I suppose little endian i looks like 00000011 00000000
and big endian i looks like 00000000 00000011


And I suppose that another possibility is that i looks like 00010000
00000100. What does the standard say that rules out my supposition?
What does it say to make your two suppositions the only possibilities?
You aren't "playing" until you actual cite the relevant text which my
supposition would violate.

Apr 1 '06 #130

