
is "typedef int int;" illegal????

Hi

Suppose you have somewhere

#define BOOL int

and somewhere else

typedef BOOL int;

This gives

typedef int int;

To me, this looks like a null assignment:

a = a;

Would it break something if lcc-win32 would accept that,
maybe with a warning?

Is the compiler *required* to reject that?

Microsoft MSVC: rejects it.
lcc-win32 now rejects it.
gcc (with no flags) accepts it with some warnings.
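
For reference, a minimal sketch of the two declaration orders, leaving the
macro out of it (with "#define BOOL int" in effect, either spelling expands
to "typedef int int;"):

typedef int BOOL;   /* correct order: declares BOOL as an alias for int */

typedef int int;    /* what the macro expansion produces: the second "int"
                       is read as another type specifier, "int int" is not
                       a valid specifier combination and no identifier gets
                       declared, so a diagnostic is required */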

Thanks

jacob
---
A free compiler system for windows:
http://www.cs.virginia.edu/~lcc-win32

Mar 24 '06
Old Wolf wrote:

jacob navia wrote:
Like

a = a;

it does nothing


That code does do something, if "a" is volatile.


It's undefined if (a) is indeterminate.
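
(A tiny illustration of the volatile case, for anyone following along --
a sketch, not tied to any particular implementation:)

volatile int a;

void touch(void)
{
    a = a;   /* with a volatile-qualified, this performs a read of a and
                then a write of a; both accesses are side effects that the
                implementation must carry out */
}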

--
pete
Mar 27 '06 #51
Wojtek Lerch wrote:
BTW Think about

typedef long long long long;

;-)


Stephen Sprunk wrote:
That "long long" even exists is a travesty.

What are we going to do when 128-bit ints become common in another couple
decades? Call them "long long long"? Or if we redefine "long long" to be
128-bit ints and "long" to be 64-bit ints, will a 32-bit int be a "short
long" or a "long short"? Maybe 32-bit ints will become "short" and 16-bit
ints will be a "long char" or "short short"? Or is a "short short" already
equal to a "char"?

All we need are "int float" and "double int" and the entire C type system
will be perfect! </sarcasm>


It's interesting to note that most implementations (all of them I've
ever seen, in fact) only provide three of the four standard int type
sizes, with two of the four being the same size. For example,
consider the following typical choices of type sizes for various
CPU word sizes:

word | char | short| int | long | long long
-----+------+------+------+------+----------
8 | 8 | 16* | 16* | 32 | 64
9 | 9 | 18* | 18* | 36 | 72
9 | 9 | 18 | 36* | 36* | 72
12 | 8 | 24* | 24* | 48 | 96
16 | 8 | 16* | 16* | 32 | 64
16 | 8 | 16 | 32* | 32* | 64
18 | 9 | 18* | 18* | 36 | 72
18 | 9 | 18 | 36* | 36* | 72
20 | 10 | 20* | 20* | 40 | 80
24 | 8 | 24* | 24* | 48 | 96
32 | 8 | 16 | 32* | 32* | 64
36 | 9 | 18 | 36* | 36* | 72
40 | 8 | 20 | 40* | 40* | 80
60 | 10 | 30 | 60* | 60* | 120
64 | 8 | 16 | 32 | 64* | 64*
64 | 8 | 16 | 64* | 64* | 128

I've marked the duplicate type sizes in each row. Notice
that every row has two types of the same size.

So it's tempting to conclude that adding another int type
size to C would simply force compiler writers to provide
four actually different int sizes instead of only three.
Personally, I don't think we'll ever see 128-bit ints
as a standard C datatype, or to put it another way,
I don't think we'll ever see four standard int sizes in C.

But *if* that ever does happen, we'll simply call them
int128_t, etc., since C99 already has those types.
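
If you want to see where a given implementation lands in that table, a
quick sketch (this reports storage sizes in bits, padding bits included,
which matches the table on typical implementations):

#include <stdio.h>
#include <limits.h>

int main(void)
{
    printf("char      : %d bits\n", CHAR_BIT);
    printf("short     : %d bits\n", (int)(sizeof(short) * CHAR_BIT));
    printf("int       : %d bits\n", (int)(sizeof(int) * CHAR_BIT));
    printf("long      : %d bits\n", (int)(sizeof(long) * CHAR_BIT));
    printf("long long : %d bits\n", (int)(sizeof(long long) * CHAR_BIT));
    return 0;
}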

-drt

Mar 27 '06 #52


David R Tribble wrote On 03/27/06 11:44,:

It's interesting to note that most implementations (all of them I've
ever seen, in fact) only provide three of the four standard int type
sizes, with two of the four being the same size. [...]


In a 64-bit program for SPARC there are four different
integer widths: 8-bit char, 16-bit short, 32-bit int, and
64-bit long and long long.

My (possibly faulty) recollection has it that the DEC
Alpha used the same arrangement (without "long long") in
its compilers for OSF/1.

--
Er*********@sun.com

Mar 27 '06 #53
David R Tribble wrote:
It's interesting to note that most implementations (all of them I've
ever seen, in fact) only provide three of the four standard int type
sizes, with two of the four being the same size. [...]


Eric Sosman wrote:
In a 64-bit program for SPARC there are four different
integer widths: 8-bit char, 16-bit short, 32-bit int, and
64-bit long and long long.
Again, that's only three int sizes (four if you count 'char' as an int
type, which I'm not).

A 64-bit CPU could come the closest to having all four int
sizes: 16/32/64/128. But I don't know of any 64-bit C compilers
that do.

My (possibly faulty) recollection has it that the DEC
Alpha used the same arrangement (without "long long") in
its compilers for OSF/1.


Yes, the DEC Alpha 64-bit CPU for OSF/1 used 16/32/64 ints
with 64-bit pointers (it did not have 'long long'). If it had had
128-bit 'long long', it would have been the first in my experience
with four different int sizes, but it didn't.

-drt

Mar 27 '06 #54


David R Tribble wrote On 03/27/06 12:35,:
David R Tribble wrote:
It's interesting to note that most implementations (all of them I've
ever seen, in fact) only provide three of the four standard int type
sizes, with two of the four being the same size. [...]


Eric Sosman wrote:
In a 64-bit program for SPARC there are four different
integer widths: 8-bit char, 16-bit short, 32-bit int, and
64-bit long and long long.

Again, that's only three int sizes (four if you count 'char' as an int
type, which I'm not).


I was misled by the table in your post, whose
column headers listed five integer types (plus "word").

(Also: Why in the world do you exclude `char' from
the repertoire of "standard int types?" Are you put off
by the uncertainty over its signedness, perhaps? When I
spotted the mismatch between your "four standard int types"
and the six columns in the table, I quickly excluded "word"
but then guessed you'd forgotten to count `long long'. It
never occurred to me that you'd, er, recharacterize `char'
as a non-integer -- and it seems a bizarre stance for a C
programmer to take.)

--
Er*********@sun.com

Mar 27 '06 #55
On 2006-03-27, Eric Sosman <Er*********@sun.com> wrote:


David R Tribble wrote On 03/27/06 11:44,:

It's interesting to note that most implementations (all of them I've
ever seen, in fact) only provide three of the four standard int type
sizes, with two of the four being the same size. [...]
In a 64-bit program for SPARC there are four different
integer widths: 8-bit char, 16-bit short, 32-bit int, and
64-bit long and long long.


I don't think he was counting char, when he talked about "three" of
"four".

My (possibly faulty) recollection has it that the DEC
Alpha used the same arrangement (without "long long") in
its compilers for OSF/1.

Mar 27 '06 #56
"David R Tribble" <da***@tribble.com> writes:
David R Tribble wrote:
It's interesting to note that most implementations (all of them I've
ever seen, in fact) only provide three of the four standard int type
sizes, with two of the four being the same size. [...]


Eric Sosman wrote:
In a 64-bit program for SPARC there are four different
integer widths: 8-bit char, 16-bit short, 32-bit int, and
64-bit long and long long.


Again, that's only three int sizes (four if you count 'char' as an int
type, which I'm not).


Well, you should, because it is.

8/16/32/64 is fairly common these days. Making full use of all 5
integer type sizes, assuming 8-bit char, would of course require
8/16/32/64/128 -- and I've never seen a system with 128-bit integers.

When 32-bit integers and pointers were common, it wasn't difficult to
foresee that they would become inadequate, and that we'd move to 64
bits. Now that 64-bit integers and pointers are becoming widespread,
I suspect we've reached a plateau; I don't think we'll move on to 128
bits for several decades. A 16-exabyte address space will keep me
happy for quite a while; even where I work, we're barely dealing with
petabytes, and that's not directly addressable.

--
Keith Thompson (The_Other_Keith) ks***@mib.org <http://www.ghoti.net/~kst>
San Diego Supercomputer Center <*> <http://users.sdsc.edu/~kst>
We must do something. This is something. Therefore, we must do this.
Mar 27 '06 #57
On 2006-03-27, Eric Sosman <Er*********@sun.com> wrote:
When I spotted the mismatch between your "four standard int types" and
the six columns in the table, I quickly excluded "word" but then
guessed you'd forgotten to count `long long'. It never occurred to me
that you'd, er, recharacterize `char' as a non-integer -- and it seems
a bizarre stance for a C programmer to take.)


The keyword "int" is not allowed as part of its type name, therefore it
is arguable that it is not an "int type" despite being an "integer
type".
Mar 27 '06 #58
On 2006-03-27, Keith Thompson <ks***@mib.org> wrote:
"David R Tribble" <da***@tribble.com> writes:
David R Tribble wrote:
It's interesting to note that most implementations (all of them I've
ever seen, in fact) only provide three of the four standard int type
sizes, with two of the four being the same size. [...]


Eric Sosman wrote:
In a 64-bit program for SPARC there are four different
integer widths: 8-bit char, 16-bit short, 32-bit int, and
64-bit long and long long.


Again, that's only three int sizes (four if you count 'char' as an int
type, which I'm not).


Well, you should, because it is.

8/16/32/64 is fairly common these days. Making full use of all 5
integer type sizes, assuming 8-bit char, would of course require
8/16/32/64/128 -- and I've never seen a system with 128-bit integers.

When 32-bit integers and pointers were common, it wasn't difficult to
foresee that they would become inadequate, and that we'd move to 64
bits. Now that 64-bit integers and pointers are becoming widespread,
I suspect we've reached a plateau; I don't think we'll move on to 128
bits for several decades. A 16-exabyte address space will keep me
happy for quite a while; even where I work, we're barely dealing with
petabytes, and that's not directly addressible.


a 128-bit word size might make sense, though, for a specialized system
that is intended mainly to work with high-precision floating point.
But I'll agree that probably LP64/LLP64 are going to be the most
common model for hosted systems from here on out. [those are 8/16/32/64
with 64- and 32-bit long, respectively, and 64-bit pointers.]
Mar 27 '06 #59
Jordan Abel <ra*******@gmail.com> writes:
On 2006-03-27, Eric Sosman <Er*********@sun.com> wrote:
When I spotted the mismatch between your "four standard int types" and
the six columns in the table, I quickly excluded "word" but then
guessed you'd forgotten to count `long long'. It never occurred to me
that you'd, er, recharacterize `char' as a non-integer -- and it seems
a bizarre stance for a C programmer to take.)


The keyword "int" is not allowed as part of its type name, therefore it
is arguable that it is not an "int type" despite being an "integer
type".


The language makes no such distinction. Type char is an integer type;
the only thing that's really special about it is that plain char may
be either signed or unsigned.

--
Keith Thompson (The_Other_Keith) ks***@mib.org <http://www.ghoti.net/~kst>
San Diego Supercomputer Center <*> <http://users.sdsc.edu/~kst>
We must do something. This is something. Therefore, we must do this.
Mar 27 '06 #60
Eric Sosman wrote:
When I spotted the mismatch between your "four standard int types" and
the six columns in the table, I quickly excluded "word" but then
guessed you'd forgotten to count `long long'. It never occurred to me
that you'd, er, recharacterize `char' as a non-integer -- and it seems
a bizarre stance for a C programmer to take.)


Jordan Abel wrote:
The keyword "int" is not allowed as part of its type name, therefore it
is arguable that it is not an "int type" despite being an "integer
type".


Exactly. Yes, 'char' is an integer type of C, but it's not an 'int'
type (because 'int' is not allowed as part of its type name).

Doesn't matter anyway; my point is still true that all C compilers
to date (at least those I'm aware of) support two standard integer
types of identical size. Three out of four or four out of five, either
way, there appears to always be a redundant type.

-drt

Mar 27 '06 #61

Jordan Abel wrote:
On 2006-03-27, Eric Sosman <Er*********@sun.com> wrote:
When I spotted the mismatch between your "four standard int types" and
the six columns in the table, I quickly excluded "word" but then
guessed you'd forgotten to count `long long'. It never occurred to me
that you'd, er, recharacterize `char' as a non-integer -- and it seems
a bizarre stance for a C programmer to take.)


The keyword "int" is not allowed as part of its type name, therefore it
is arguable that it is not an "int type" despite being an "integer
type".


Perhaps; but why bother talking about 'int' types in the first place?
Why not discuss "integer" types instead?

Mar 27 '06 #62


David R Tribble wrote On 03/27/06 15:23,:
Eric Sosman wrote:
When I spotted the mismatch between your "four standard int types" and
the six columns in the table, I quickly excluded "word" but then
guessed you'd forgotten to count `long long'. It never occurred to me
that you'd, er, recharacterize `char' as a non-integer -- and it seems
a bizarre stance for a C programmer to take.)


Jordan Abel wrote:
The keyword "int" is not allowed as part of its type name, therefore it
is arguable that it is not an "int type" despite being an "integer
type".

Exactly. Yes, 'char' is an integer type of C, but it's not an 'int'
type (because 'int' is not allowed as part of its type name).

Doesn't matter anyway; my point is still true that all C compilers
to date (at least those I'm aware of) support two standard integer
types of identical size. Three out of four or four out of five, either
way, there appears to always be a redundant type.


Isn't the Alpha under OSF/1 (already mentioned) a
counterexample? It's "four out of four" (or "three out
of three" if you count un-char-itably). If you want to
look from the other angle, it has no "redundant" type.

--
Er*********@sun.com

Mar 27 '06 #63


Jordan Abel wrote On 03/27/06 15:07,:

a 128-bit word size might make sense, though, for a specialized system
that is intended to mainly work with high-precision floating point,
though. [...]


Not a mere theoretical possibility: DEC VAX supported
four floating-point formats, one of which (H-format) used
128 bits. The small-VAX models I used implemented H-format
with trap-and-emulate, but it was part of the instruction
architecture nonetheless and in that sense a "native" form.

--
Er*********@sun.com

Mar 27 '06 #64
Eric Sosman <Er*********@sun.com> writes:
[...]
Doesn't matter anyway; my point is still true that all C compilers
to date (at least those I'm aware of) support two standard integer
types of identical size. Three out of four or four out of five, either
way, there appears to always be a redundant type.


Isn't the Alpha under OSF/1 (already mentioned) a
counterexample? It's "four out of four" (or "three out
of three" if you count un-char-itably). If you want to
look from the other angle, it has no "redundant" type.


Alpha OSF/1 has the following:

char 8
short 16
int 32
long 64
long long 64

It has no redundant type only if you ignore C99.

In any case, redundant types aren't necessarily a bad thing. The
standard guarantees a minimum range for each type, and requires a
reasonably large set of types to be mapped onto the native types of
the underlying system. Having some types overlap is better than
leaving gaps.

--
Keith Thompson (The_Other_Keith) ks***@mib.org <http://www.ghoti.net/~kst>
San Diego Supercomputer Center <*> <http://users.sdsc.edu/~kst>
We must do something. This is something. Therefore, we must do this.
Mar 27 '06 #65
On 2006-03-27, Eric Sosman <Er*********@sun.com> wrote:


Jordan Abel wrote On 03/27/06 15:07,:

a 128-bit word size might make sense, though, for a specialized system
that is intended to mainly work with high-precision floating point,
though. [...]


Not a mere theoretical possibility: DEC VAX supported
four floating-point formats, one of which (H-format) used
128 bits. The small-VAX models I used implemented H-format
with trap-and-emulate, but it was part of the instruction
architecture nonetheless and in that sense a "native" form.


I'm talking about a hypothetical machine that used 128 bits for
everything, as some allegedly now use 32 bits for everything.
Mar 27 '06 #66
On 2006-03-27, Keith Thompson <ks***@mib.org> wrote:
Jordan Abel <ra*******@gmail.com> writes:
On 2006-03-27, Eric Sosman <Er*********@sun.com> wrote:
When I spotted the mismatch between your "four standard int types" and
the six columns in the table, I quickly excluded "word" but then
guessed you'd forgotten to count `long long'. It never occurred to me
that you'd, er, recharacterize `char' as a non-integer -- and it seems
a bizarre stance for a C programmer to take.)
The keyword "int" is not allowed as part of its type name, therefore it
is arguable that it is not an "int type" despite being an "integer
type".


The language makes no such distinction.


We have short ints, long ints, and no char ints. that's a language
distinction if there ever was one. "int type" isn't really a term
defined by the language anyway, and arguably one plausible definition is
"types declared using the keyword 'int'".
Type char is an integer type;
the only thing that's really special about it is that plain char may
be either signed or unsigned.

Mar 27 '06 #67
Jordan Abel <ra*******@gmail.com> writes:
On 2006-03-27, Keith Thompson <ks***@mib.org> wrote:
Jordan Abel <ra*******@gmail.com> writes:
On 2006-03-27, Eric Sosman <Er*********@sun.com> wrote:
When I spotted the mismatch between your "four standard int types" and
the six columns in the table, I quickly excluded "word" but then
guessed you'd forgotten to count `long long'. It never occurred to me
that you'd, er, recharacterize `char' as a non-integer -- and it seems
a bizarre stance for a C programmer to take.)

The keyword "int" is not allowed as part of its type name, therefore it
is arguable that it is not an "int type" despite being an "integer
type".


The language makes no such distinction.


We have short ints, long ints, and no char ints. that's a language
distinction if there ever was one. "int type" isn't really a term
defined by the language anyway, and arguably one plausible definition is
"types declared using the keyword 'int'".


We also have "short", "unsigned short", "unsigned", "long", "unsigned
long", etc.

If I wanted to define the term "int type", I suppose "any type that
*can* be declared using the keyword 'int'" might be a plausible
definition. However, the standard doesn't define such a term (any
more than it groups long, unsigned long, long long, unsigned long
long, and long double as "long types").

I see absolutely no point either in defining such a term or in
continuing this discussion.

--
Keith Thompson (The_Other_Keith) ks***@mib.org <http://www.ghoti.net/~kst>
San Diego Supercomputer Center <*> <http://users.sdsc.edu/~kst>
We must do something. This is something. Therefore, we must do this.
Mar 27 '06 #68
On 27 Mar 2006 08:44 -0800, "David R Tribble" wrote:
Wojtek Lerch wrote:
BTW Think about
typedef long long long long;
;-)

Stephen Sprunk wrote:
That "long long" even exists is a travesty.

What are we going to do when 128-bit ints become common in another couple
decades? Call them "long long long"? Or if we redefine "long long" to be
128-bit ints and "long" to be 64-bit ints, will a 32-bit int be a "short
long" or a "long short"? Maybe 32-bit ints will become "short" and 16-bit
ints will be a "long char" or "short short"? Or is a "short short" already
equal to a "char"?

All we need are "int float" and "double int" and the entire C type system
will be perfect! </sarcasm>


It's interesting to note that most implementations (all of them I've
ever seen, in fact) only provide three of the four standard int type
sizes, with two of the four being the same size. For example,
consider the following typical choices of type sizes for various
CPU word sizes:

word | char | short| int | long | long long
-----+------+------+------+------+----------


Data structures and their sizes matter a lot for portability (because when
the data have the same sizes everywhere, the results of & ^ | etc. on them
are all well defined in the same way), so the portability problems disappear.

So using char, int, short, long, etc. is an error if someone cares about
the portability of a program.
They should have been int8, int16, int32, etc. (with char being int8)
from day 1, plus uns8, uns16, uns32, etc.
The problem could be that different CPUs have different 'main' word sizes,
and this affects efficiency.
Mar 28 '06 #69
Stephen Sprunk wrote:
That "long long" even exists is a travesty.
Hardly. The need for something along those lines was so pressing
that different compiler vendors had invented a variety of solutions
already, including some using "long long".
What are we going to do when 128-bit ints become common in another couple
decades?


Use int_least128_t if you need a standard name for a signed int
with width at least 128 bits. If you don't know what that is,
here's an opportunity to learn.
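
A sketch of how a program might ask for such a type portably, once an
implementation provides it ("wide_t" is just an illustrative name here):

#include <stdint.h>

#ifdef INT_LEAST128_MAX            /* defined only if the implementation
                                      actually provides int_least128_t   */
typedef int_least128_t wide_t;
#else
typedef int_least64_t  wide_t;     /* int_least64_t is always required   */
#endif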
Mar 28 '06 #70
Keith Thompson wrote:
Mathematically, they're called "Gaussian integers".


And like most specialized types there isn't strong reason
to build them into the language (as opposed to letting the
programmer use a library for them). Probably floating-
complex should have been in that category, were it not for
established Fortran practice.
Mar 28 '06 #71
jacob navia wrote:
lcc-win32 supports 128 bit integers. The type is named:
int128


We hope you defined the appropriate stuff in <stdint.h>
and <inttypes.h>, since that is what portable programs
will have to use instead of implementation-specific names.

Note also that you have made lcc-win32 non standards
conformant. You should have used an identifier reserved
for use by the C implementation, not one that is
guaranteed to be available for the application.
Mar 28 '06 #72
Douglas A. Gwyn wrote:
Stephen Sprunk wrote:
That "long long" even exists is a travesty.


Hardly. The need for something along those lines was so pressing
that different compiler vendors had invented a variety of solutions
already, including some using "long long".


It's not "something along those lines" which was a travesty. A
size-named type like the ones that were introduced in C99 would have
been much better. It's specifically the choice of "long long" for the
type name that made it so objectionable.

Mar 28 '06 #73
Eric Sosman wrote:
When I spotted the mismatch between your "four standard int types" and
the six columns in the table, I quickly excluded "word" but then
guessed you'd forgotten to count `long long'. It never occurred to me
that you'd, er, recharacterize `char' as a non-integer -- and it seems
a bizarre stance for a C programmer to take.)


Jordan Abel writes:
The keyword "int" is not allowed as part of its type name, therefore it
is arguable that it is not an "int type" despite being an "integer
type".


Keith Thompson wrote:
The language makes no such distinction.


Jordan Abel writes:
We have short ints, long ints, and no char ints. that's a language
distinction if there ever was one. "int type" isn't really a term
defined by the language anyway, and arguably one plausible definition is
"types declared using the keyword 'int'".


Keith Thompson wrote:
We also have "short", "unsigned short", "unsigned", "long", "unsigned
long", etc.

If I wanted to define the term "int type", I suppose "any type that
*can* be declared using the keyword 'int'" might be a plausible
definition. However, the standard doesn't define such a term (any
more than it groups long, unsigned long, long long, unsigned long
long, and long double as "long types").

I see absolutely no point either in defining such a term or in
continuing this discussion.


Sorry for the confusion.

But like I said, it doesn't change my point, that all C compilers I've
ever seen have a redundant integer type size.

By itself, this is not necessarily a bad thing, but it does make
writing portable code a headache sometimes. I'm still waiting for
a standard macro that tells me about endianness (but that's
a topic for another thread).

-drt

Mar 28 '06 #74
Stephen Sprunk wrote:
That "long long" even exists is a travesty.


Douglas A. Gwyn wrote:
Hardly. The need for something along those lines was so pressing
that different compiler vendors had invented a variety of solutions
already, including some using "long long".


Kuyper wrote:
It's not "something along those lines" which was a travesty. A
size-named type like the ones that were introduced in C99 would have
been much better. It's specifically the choice of "long long" for the
type name that made it so objectionable.


Type names like 'long long' have the advantage of being decoupled
from the exact word size of the underlying CPU. That's why you
can write reasonably portable code for machines that don't have
nice multiple-of-8 word sizes.

Some programmers may prefer using 'int_least64_t' over 'long long'.
But I don't.

-drt

Mar 28 '06 #75
ku****@wizard.net writes:
Douglas A. Gwyn wrote:
Stephen Sprunk wrote:
> That "long long" even exists is a travesty.


Hardly. The need for something along those lines was so pressing
that different compiler vendors had invented a variety of solutions
already, including some using "long long".


It's not "something along those lines" which was a travesty. A
size-named type like the ones that were introduced in C99 would have
been much better. It's specifically the choice of "long long" for the
type name that made it so objectionable.


None of the predefined integer types (char, short, int, long, long
long) have names that specify their actual sizes, allowing the sizes
to vary across platforms. Only minimum sizes are specified. This
encourages code that doesn't assume specific sizes (though there's
still plenty of code that assumes "all the world's a VAX", or these
days, "all the world's an x86". Introducing a new fundamental type
with a size-specific name would break that pattern, and could break
systems that don't have power-of-two sizes (vanishingly rare these
days, but the standard still allows for them).

--
Keith Thompson (The_Other_Keith) ks***@mib.org <http://www.ghoti.net/~kst>
San Diego Supercomputer Center <*> <http://users.sdsc.edu/~kst>
We must do something. This is something. Therefore, we must do this.
Mar 28 '06 #76
"David R Tribble" <da***@tribble.com> wrote in message
news:11**********************@t31g2000cwb.googlegr oups.com...
I'm still waiting for
a standard macro that tells me about endianness (but that's
a topic for another thread).


One macro, or one per integer type? C doesn't disallow systems where some
types are big endian and some little endian.

C doesn't even disallow "mixed endian" -- any permutation of bits is OK.
Would you just classify those as "other", or do you have something more
complicated in mind? Or would you just ban them?

And what about padding bits -- how useful is it to know the endianness of a
type if you don't know where its padding bits are?
Mar 28 '06 #77
ku****@wizard.net wrote:
... It's specifically the choice of "long long" for the
type name that made it so objectionable.


Why is that objectionable? It avoided using up another
identifier for a new keyword, did not embed some assumed
size in its name (unlike several extensions), and
matched the choice of some of the existing extensions.
Mar 28 '06 #78
David R Tribble wrote:
I'm still waiting for a standard macro that tells me about endianness (but that's
a topic for another thread).


Wojtek Lerch wrote:
One macro, or one per integer type? C doesn't disallow systems where some
types are big endian and some little endian.

C doesn't even disallow "mixed endian" -- any permutation of bits is OK.
Would you just classify those as "other", or do you have something more
complicated in mind? Or would you just ban them?

And what about padding bits -- how useful is it to know the endianness of a
type if you don't know where its padding bits are?


Something along the lines of:
http://david.tribble.com/text/c9xmach.txt

This was written in 1995, before 'long long' existed, so I'd have
to add a few more macros, including:

#define _ORD_LONG_HL n

My suggestion is just one of hundreds of ways to describe
endianness, bits sizes, alignment, padding, etc., that have been
invented over time. None of which ever made it into ISO C.

-drt

Mar 29 '06 #79

Keith Thompson wrote:
ku****@wizard.net writes:
Douglas A. Gwyn wrote:
Stephen Sprunk wrote:
> That "long long" even exists is a travesty.

Hardly. The need for something along those lines was so pressing
that different compiler vendors had invented a variety of solutions
already, including some using "long long".
It's not "something along those lines" which was a travesty. A
size-named type like the ones that were introduced in C99 would have
been much better. It's specifically the choice of "long long" for the
type name that made it so objectionable.


None of the predefined integer types (char, short, int, long, long
long) have names that specify their actual sizes, allowing the sizes
to vary across platforms. Only minimum sizes are specified.


In other words, the built-in types were roughly equivalent to
int_leastN_t or int_fastN_t. I definitely approve of types that are
allowed to have different sizes on different platforms. I think that
they are, by far, the most appropriate types to use in most contexts.

However, while using English adjectives as keywords to specify the
minimum size seemed reasonable when the number of different sizes was
small, it has become steadily less reasonable as the number of
different sizes has increased. The new size-named types provide a more
scalable solution to identifying the minimum size. Were backward
compatibility not an issue, I'd recommend abolishing the original type
names in favor of size-named types. I wouldn't recommend the current
naming scheme for the new types, however - intN_t should have been used
for the fast types, with int_exactN_t being reserved for the
exact-sized types.

Keith Thompson wrote:
This encourages code that doesn't assume specific sizes (though there's
still plenty of code that assumes "all the world's a VAX", or these
days, "all the world's an x86").

The same benefit accrues to the non-exact-sized size-named types.

Keith Thompson wrote:
Introducing a new fundamental type
with a size-specific name would break that pattern, and could break
systems that don't have power-of-two sizes (vanishingly rare these
days, but the standard still allows for them).


You're assuming that the size-specific name would identify an
exact-sized type rather than a minimum-sized type. I would not approve
of that solution any more than you would, for precisely the reasons you
give.

Mar 29 '06 #80
David R Tribble wrote:
....
Type names like 'long long' have the advantage of being decoupled
from the exact word size of the underlying CPU. That's why you
can write reasonably portable code for machines that don't have
nice multiple-of-8 word sizes.

int_least64_t shares that same characteristic in a more scalable
fashion.

Some programmers may prefer using 'int_least64_t' over 'long long'.
But I don't.


I would prefer using int64 over either of those alternatives, with
int64 being given the same meaning currently attached to int_least64_t.

Mar 29 '06 #81
Douglas A. Gwyn wrote:
ku****@wizard.net wrote:
... It's specifically the choice of "long long" for the
type name that made it so objectionable.
Why is that objectionable?


Because it's the wrong solution, and adopting it into the standard
creates justification for anticipating (hopefully incorrectly) that
this wrong solution is the way that future versions of the standard
will handle new type sizes.
... It avoided using up another
identifier for a new keyword, did not embed some assumed
size in its name (unlike several extensions),
You consider that an advantage. I think it's a disadvantage to have a
type whose minimum required size corresponds to 64 bits, but giving
it a name which does not make that fact explicit.

Also, I've heard it criticised because of the fact that its form makes
it something unique in the standard: a doubled keyword that is neither
a syntax error nor equivalent to the corresponding un-doubled keyword. I
don't know much about the internals of compiler design, but I've seen
comments from someone who thought he did, who claimed that this
distinction unnecessarily imposed an (admittedly small) additional
level of complexity on the parser.
and
matched the choice of some of the existing extensions.


I recognise the practical necessity of taking into consideration
existing practice. My criticism of 'long long' was aimed primarily at
those who created it in the first place as an extension to existing
implementations.

Mar 29 '06 #82
"Douglas A. Gwyn" <DA****@null.net> wrote:
jacob navia wrote:
lcc-win32 supports 128 bit integers. The type is named:
int128


We hope you defined the appropriate stuff in <stdint.h>
and <inttypes.h>, since that is what portable programs
will have to use instead of implementation-specific names.

Note also that you have made lcc-win32 non standards conformant.

^
even more
HTH; HAND.

Richard
Mar 29 '06 #83
ku****@wizard.net wrote:

[ about "long long": ]
Also, I've heard it criticised because of the fact that it's form makes
it something unique in the standard: a doubled keyword that is neither
a syntax error nor eqivalent to the corresponding un-doubled keyword. I
don't know much about the internals of compiler design, but I've seen
comments on someone who thought he did, who claimed that this
distinction unnecessarily imposed an (admittedly small) additional
level of complexity on the parser.


More so than "long int", "signed int", and "unsigned short int" already
did? If so, I can't help thinking that the difference must have been
truly slight.

Richard
Mar 29 '06 #84
Douglas A. Gwyn wrote:
jacob navia wrote:
lcc-win32 supports 128 bit integers. The type is named:
int128

We hope you defined the appropriate stuff in <stdint.h>
and <inttypes.h>, since that is what portable programs
will have to use instead of implementation-specific names.

Note also that you have made lcc-win32 non standards
conformant. You should have used an identifier reserved
for use by the C implementation, not one that is
guaranteed to be available for the application.


The use of int128 is only there IF you

#include <int128.h>

Otherwise you can use the identifier int128 as you want.

jacob
Mar 29 '06 #85
On 2006-03-29, jacob navia <ja***@jacob.remcomp.fr> wrote:
Douglas A. Gwyn wrote:
jacob navia wrote:
lcc-win32 supports 128 bit integers. The type is named:
int128

We hope you defined the appropriate stuff in <stdint.h>
and <inttypes.h>, since that is what portable programs
will have to use instead of implementation-specific names.

Note also that you have made lcc-win32 non standards
conformant. You should have used an identifier reserved
for use by the C implementation, not one that is
guaranteed to be available for the application.


The use of int128 is only there IF you

#include <int128.h>

Otherwise you can use the identifier int128 as you want.

jacob


In that case, why not use stdint.h and int128_t, int_least128_t, and
int_fast128_t?
Mar 29 '06 #86
On 2006-03-29, Richard Bos <rl*@hoekstra-uitgeverij.nl> wrote:
ku****@wizard.net wrote:

[ about "long long": ]
Also, I've heard it criticised because of the fact that it's form makes
it something unique in the standard: a doubled keyword that is neither
a syntax error nor eqivalent to the corresponding un-doubled keyword. I
don't know much about the internals of compiler design, but I've seen
comments on someone who thought he did, who claimed that this
distinction unnecessarily imposed an (admittedly small) additional
level of complexity on the parser.


More so than "long int", "signed int", and "unsigned short int" already
did? If so, I can't help thinking that the difference must have been
truly slight.


Those still have only one of each keyword present. It's not like "long
long" acts as a 'pseudo-keyword' - "long int signed long" is a valid
name for the type.
Mar 29 '06 #87
Jordan Abel wrote:
On 2006-03-29, jacob navia <ja***@jacob.remcomp.fr> wrote:
Douglas A. Gwyn wrote:
jacob navia wrote:
lcc-win32 supports 128 bit integers. The type is named:
int128
We hope you defined the appropriate stuff in <stdint.h>
and <inttypes.h>, since that is what portable programs
will have to use instead of implementation-specific names.

Note also that you have made lcc-win32 non standards
conformant. You should have used an identifier reserved
for use by the C implementation, not one that is
guaranteed to be available for the application.


The use of int128 is only there IF you

#include <int128.h>

Otherwise you can use the identifier int128 as you want.

jacob

In that case, why not use stdint.h and int128_t, int_least128_t, and
int_fast128_t?


Because that would force ALL users of stdint.h to accept int128_t and
all the associated machinery, which is probably not what all of them want.

But the name int128 is not "cast in stone" and since I suppose the names
intXXX_t are reserved I could use those.

Basically this type is implemented using lcc-win32 specific extensions
like operator overloading, which make it easy to define new types. These
extensions are disabled when you invoke the compiler under the "no
extensions" mode. If I put the 128 bit integers in the stdint
header, the operator overloading required would not work under the "ansi
c" environment, and problems would appear. That is why I use a special
header that will be used only by people that want those integer types.

Of course there is a strict ANSI C interface for 128 bit integers, but
if you use it, you would have to write

int128 a,b,c;
...
c = i128add(a,b);

instead of

c = a+b;
Mar 29 '06 #88
jacob navia wrote:
Douglas A. Gwyn wrote:
jacob navia wrote:
lcc-win32 supports 128 bit integers. The type is named:
int128


We hope you defined the appropriate stuff in <stdint.h>
and <inttypes.h>, since that is what portable programs
will have to use instead of implementation-specific names.

Note also that you have made lcc-win32 non standards
conformant. You should have used an identifier reserved
for use by the C implementation, not one that is
guaranteed to be available for the application.


The use of int128 is only there IF you

#include <int128.h>

Otherwise you can use the identifier int128 as you want.


This is IMO not a good solution; if someone defined
typedef struct {
....
} int128;
in a library and one of your users wants to use this
library and another one which uses the
implementation-provided 128 bit exact width signed
integer type, he or she runs into a rather unnecessary
problem.
IMO, providing appropriate definitions in the
appropriate headers is better.

FWIW: I have seen enough "int64" and "Int64" structure
typedefs to assume that there may be the same for
128 bits.

Cheers
Michael
--
E-Mail: Mine is an /at/ gmx /dot/ de address.
Mar 29 '06 #89
"David R Tribble" <da***@tribble.com> wrote in message
news:11*********************@i39g2000cwa.googlegro ups.com...
David R Tribble wrote:
I'm still waiting for a standard macro that tells me about endianness
(but that's
a topic for another thread).

Wojtek Lerch wrote:
One macro, or one per integer type? C doesn't disallow systems where
some
types are big endian and some little endian.

C doesn't even disallow "mixed endian" -- any permutation of bits is OK.
Would you just classify those as "other", or do you have something more
complicated in mind? Or would you just ban them?

And what about padding bits -- how useful is it to know the endianness of
a
type if you don't know where its padding bits are?


Something along the lines of:
http://david.tribble.com/text/c9xmach.txt


I have to say that I find it rather vague and simplistic, and can't find
where it answers my questions. I have absolutely no clue how you wanted to
handle implementations that are neither clearly little-endian nor clearly
big-endian. You didn't propose to ban them, did you?

/* Bit/byte/word order */

#define _ORD_BIG 0 /* Big-endian */
#define _ORD_LITTLE 1 /* Little-endian */

#define _ORD_BITF_HL 0 /* Bitfield fill order */
#define _ORD_BYTE_HL 0 /* Byte order within shorts */
#define _ORD_WORD_HL 0 /* Word order within longs */

What about implementations with one-byte shorts? What if the bit order
within a short doesn't match the bit order in a char? What if the byte
order within a two-byte short doesn't match the byte order within a half of
a four-byte long? What about the halves of an int? What about
implementations with three-byte longs? What if the most significant bits
sit in the middle byte? Or if the three most significant bits are mapped to
the least significant bit of the three bytes?
This was written in 1995, before 'long long' existed, so I'd have
to add a few more macros, including:

#define _ORD_LONG_HL n

My suggestion is just one of hundreds of ways to describe
endianness, bits sizes, alignment, padding, etc., that have been
invented over time. None of which ever made it into ISO C.


Perhaps because they all made the incorrect assumption that in every
conforming implementation, every integer type must necessarily be either
little endian or big endian?

Personally, I think it would be both easier and more useful not to try to
classify all types on all implementations, but instead to define names for
big- and little-endian types and make them all optional. For instance:

uint_be32_t -- a 32-bit unsigned type with no padding bits and a
big-endian representation, if such a type exists.

The representation is big-endian if:

* for any two value bits located in different bytes, the bit whose byte
has a lower address represents a higher value
* for any two value bits located in the same byte, the order of their
represented values matches the order of the values they represent in
unsigned char
Mar 29 '06 #90
On 2006-03-29, jacob navia <ja***@jacob.remcomp.fr> wrote:
Jordan Abel wrote:
On 2006-03-29, jacob navia <ja***@jacob.remcomp.fr> wrote:
Douglas A. Gwyn wrote:

jacob navia wrote:
>lcc-win32 supports 128 bit integers. The type is named:
>int128
We hope you defined the appropriate stuff in <stdint.h>
and <inttypes.h>, since that is what portable programs
will have to use instead of implementation-specific names.

Note also that you have made lcc-win32 non standards
conformant. You should have used an identifier reserved
for use by the C implementation, not one that is
guaranteed to be available for the application.

The use of int128 is only there IF you

#include <int128.h>

Otherwise you can use the identifier int128 as you want.

jacob

In that case, why not use stdint.h and int128_t, int_least128_t, and
int_fast128_t?


Because that would force ALL users of stdint.h to accept int128_t and
all the associated machinery, what is probably not what all of them want.


Why? What machinery is associated with int128_t that c99 doesn't
_already_ say is permitted in stdint.h?

You'd have
[u]int128_t, [u]int_least128_t, [u]int_fast128_t, etc typedefs,
INT128_MIN, INT128_MAX, UINT128_MAX, and the associated LEAST and FAST
ones as well, INT128_C(x) and UINT128_C(x) in stdint.h

{PRI,SCN}[diouxX]{FAST,LEAST,}128 in inttypes.h

what else do you need?
But the name int128 is not "cast in stone" and since I suppose the names
intXXX_t are reserved I could use those.

Basically this type is implemented using lcc-win32 specific extensions
like operator overloading, what allows to easily define new types. This
extensions are disabled when you invoke the compiler under the "no
extensions" mode. If I would put the 128 bit integers in the stdint
header, the operator overloading required would not work under the "ansi
c" environment, and problems would appear.
Why not implement it as a standard type so that it can _always_ be used,
with nothing but an #ifdef INT128_MAX to check if it's present?
That is why I use a special header that will be used only by people
the want those integer types.

Of course there is a strict ANSI C interface for 128 bit integers, but
if you use it, you would have to write

int128 a,b,c;
...
c = i128add(a,b);

instead of

c = a+b;


why? why not implement it as a standard type, with the compiler knowing
about it?

#ifdef INT_LEAST128_MAX
int_least128_t a,b,c;
c = a+b;
#else
#error No 128-bit integer type available
#endif
Mar 29 '06 #91
jacob navia wrote:
Jordan Abel wrote:
In that case, why not use stdint.h and int128_t, int_least128_t, and
int_fast128_t?

Because that would force ALL users of stdint.h to accept int128_t and
all the associated machinery, what is probably not what all of them want.


If the programs don't try to use the type then the extra definitions
are of no consequence.
... If I would put the 128 bit integers in the stdint
header, the operator overloading required would not work under the "ansi
c" environment, and problems would appear. That is why I use a special
header that will be used only by people the want those integer types.


You ought to rethink your design. If your compiler knows the
type as __int128 (for example) then <stdint.h> need only refer
to that name. You may have to define a testable macro for
your extended environment in order for the standard header to
know whether that type is supported or not, but that kind of
thing is quite common in implementations already.
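
Something like this <stdint.h> fragment is what that amounts to -- a
sketch only; __EXTENSIONS_ON__ is a made-up feature-test macro and
__int128 stands for whatever internal name the compiler actually uses:

/* in <stdint.h> */
#ifdef __EXTENSIONS_ON__                  /* hypothetical feature-test macro */
typedef __int128          int128_t;       /* compiler-known internal type    */
typedef unsigned __int128 uint128_t;
/* ...plus INT128_MIN, INT128_MAX, UINT128_MAX and the least/fast variants,
   defined in terms of the same internal type... */
#endif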
Mar 30 '06 #92
ku****@wizard.net wrote [re "long long"]:
You consider that an advantage. I think it's a disadvantage to have a
type whose miniminum required size corresponds to 64 bits, but giving
it a name which does not make that fact explicit.
Then you should use <stdint.h>, which was introduced at the
same time. None of the "keyword" types has ever had a
specific size embedded in its name.
Also, I've heard it criticised because of the fact that it's form makes
it something unique in the standard: a doubled keyword that is neither
a syntax error nor eqivalent to the corresponding un-doubled keyword. I
don't know much about the internals of compiler design, but I've seen
comments on someone who thought he did, who claimed that this
distinction unnecessarily imposed an (admittedly small) additional
level of complexity on the parser.


If a parser generator is used (e.g. yacc) there is no significant
problem. If a hand-coded parser is used, it's nearly trivial to
handle. (Look ahead one token, for example. In Ritchie's PDP-11
C compiler a "long" counter was incremented, and there was no
diagnostic for multiple "longs". It is trivial to test for a
count of 1, 2, or many and do the right thing for each case.)
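
A toy, self-contained version of that counting approach (nothing from any
real compiler's source, just the idea):

#include <stdio.h>
#include <string.h>

/* Classify a declaration-specifier list by counting "long" tokens. */
static const char *classify(const char **spec, int n)
{
    int nlong = 0;
    for (int i = 0; i < n; i++)
        if (strcmp(spec[i], "long") == 0)
            nlong++;
    if (nlong == 0) return "(no long)";
    if (nlong == 1) return "long";
    if (nlong == 2) return "long long";
    return "error: too many 'long's";
}

int main(void)
{
    /* "long int signed long" is a valid spelling of signed long long */
    const char *spec[] = { "long", "int", "signed", "long" };
    printf("%s\n", classify(spec, 4));   /* prints "long long" */
    return 0;
}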
Mar 30 '06 #93
Douglas A. Gwyn wrote:
ku****@wizard.net wrote [re "long long"]:
You consider that an advantage. I think it's a disadvantage to have a
type whose miniminum required size corresponds to 64 bits, but giving
it a name which does not make that fact explicit.
Then you should use <stdint.h>, which was introduced at the
same time.


I plan to, should our client ever give us permission to use anything
more advanced than C94. However, I wasn't complaining about the absence
of those types - I know they exist. I was objecting to the presence of
"long long", and in particular to its presence in some pre-C99
implementations. It's that presence which forced the C committee to
accept "long long" in the same revision as the preferred alternatives.
... None of the "keyword" types has ever had a
specific size embedded in its name.


And, in retrospect, I don't approve of that fact.

Mar 30 '06 #94
David R Tribble wrote:
I'm still waiting for a standard macro that tells me about endianness
(but that's a topic for another thread).


Wojtek Lerch wrote:
One macro, or one per integer type? C doesn't disallow systems where
some types are big endian and some little endian.

C doesn't even disallow "mixed endian" -- any permutation of bits is OK.
Would you just classify those as "other", or do you have something more
complicated in mind? Or would you just ban them?


David R Tribble wrote:
Something along the lines of:
http://david.tribble.com/text/c9xmach.txt


Wojtek Lerch wrote:
I have to say that I find it rather vague and simplistic, and can't find
where it answers my questions. I have absolutely no clue how you wanted to
handle implementations that are neither clearly little-endian nor clearly
big-endian. You didn't propose to ban them, did you?
No, that's why there are three endianness macros. This allows for,
say, the PDP-11 mixed-endian 'long int' type:

#define _ORD_BIG 0 /* Big-endian */
#define _ORD_LITTLE 0 /* Little-endian */

#define _ORD_BITF_HL 0 /* Bitfield fill order */
#define _ORD_BYTE_HL 0 /* Byte order within shorts */
#define _ORD_WORD_HL 1 /* Word order within longs */

What about implementations with one-byte shorts?
Obviously the macro names could be better.

What if the bit order within a short doesn't match the bit order in a char?
What if the byte order within a two-byte short doesn't match the byte order
within a half of a four-byte long? What about the halves of an int? What about
implementations with three-byte longs? What if the most significant bits
sit in the middle byte? Or if the three most significant bits are mapped to
the least significant bit of the three bytes?
Then we need more macros with better names.
You're not saying that this is an unsolvable problem, are you?

Perhaps because they all made the incorrect assumption that in every
conforming implementation, every integer type must necessarily be either
little endian or big endian?
I didn't make that assumption.

Personally, I think it would be both easier and more useful not to try to
classify all types on all implementations, but instead to define names for
big- and little-endian types and make them all optional. For instance:
uint_be32_t -- a 32-bit unsigned type with no padding bits and a
big-endian representation, if such a type exists.


How do you tell if those types are not implemented?

More to the point, how do you tell portably what byte order plain
'int' is implemented with?
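
There is of course the usual runtime peek -- a sketch that assumes 8-bit
bytes, a 4-byte int and no padding bits, which is exactly the set of
assumptions a standard macro would let you drop:

#include <stdio.h>

int main(void)
{
    unsigned int x = 0x01020304u;
    unsigned char *p = (unsigned char *)&x;   /* inspecting representation
                                                 via unsigned char is OK  */
    if (p[0] == 0x01 && p[3] == 0x04)
        printf("big-endian int\n");
    else if (p[0] == 0x04 && p[3] == 0x01)
        printf("little-endian int\n");
    else
        printf("some other byte order\n");
    return 0;
}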

-drt

Mar 30 '06 #95
Douglas A. Gwyn wrote:
... None of the "keyword" types has ever had a
specific size embedded in its name.


Kuyper wrote:
And, in retrospect, I don't approve of that fact.


Then you probably don't approve of Java, Perl, awk, ksh, FORTRAN,
BASIC, etc., or most other programming languages, either.

-drt

Mar 30 '06 #96
"David R Tribble" <da***@tribble.com> wrote in message
news:11**********************@g10g2000cwb.googlegr oups.com...
David R Tribble wrote:
Something along the lines of:
http://david.tribble.com/text/c9xmach.txt


Wojtek Lerch wrote:
What if the bit order within a short doesn't match the bit order in a
char?
What if the byte order within a two-byte short doesn't match the byte
order
within a half of a four-byte long? What about the halves of an int? What
about
implementations with three-byte longs? What if the most significant bits
sit in the middle byte? Or if the three most significant bits are mapped
to
the least significant bit of the three bytes?


Then we need more macros with better names.
You're not saying that this is an unsolvable problem, are you?


Pretty much, depending on what exactly you call the problem and what kind of
a solution you find acceptable.

Let's concentrate on implementations that have 16-bit short types with no
padding bits. There are 20922789888000 possible permutations of 16 bits,
and the C standard doesn't disallow any of them. Even though it's
theoretically possible to come up with a system of macros allowing programs
to distinguish all the permutations, I don't think it would be very useful
or practical. For all practical purposes, a distinction between big endian,
little endian, and "other" is sufficient. There are no existing "other"
implementations anyway.

In practice, a simple one-bit solution like yours is perfectly fine.
Unfortunately, it only covers practical implementations; therefore, it
wouldn't be acceptable as a part of the standard.
Perhaps because they all made the incorrect assumption that in every
conforming implementation, every integer type must necessarily be either
little endian or big endian?


I didn't make that assumption.


Correct me if I'm wrong, but you did seem to make the assumption that there
are only two possible byte orders within a short, and that there are only
two possible "word orders" within a long, and that knowing those two bits of
information (along with the common stuff from <limits.h>) gives you complete
or at least useful knowledge about the bit order of all integer types (in
C89).

If I indeed misunderstood something, could you explain how you would use
your macros in a program to distinguish between implementations where an
unsigned short occupies two 9-bit bytes, has two padding bits, and
represents the value 0x1234 as

(a) 0x12, 0x34 ("big endian", with a padding bit at the top of each byte)
(b) 0x24, 0x68 ("big endian", with a padding bit at the bottom of each
byte)
(c) 0x22, 0x64 ("big endian", with a padding bit in the middle of each
byte)
(d) 0x34, 0x12 ("little endian", padding at the top)
(e) 0x68, 0x24 ("little endian", padding at the bottom)
(f) 0x23, 0x14 ("middle endian", with the middle bits in the first byte, a
padding bit at the top of each byte)
Personally, I think it would be both easier and more useful not to try to
classify all types on all implementations, but instead to define names
for
big- and little-endian types and make them all optional. For instance:
uint_be32_t -- a 32-bit unsigned type with no padding bits and a
big-endian representation, if such a type exists.


How do you tell if those types are not implemented?


The same way as any other type from <stdint.h> -- #if
defined(UINT_BE32_MAX).
More to the point, how do you tell portably what byte order plain
'int' is implemented with?


You don't. It doesn't make sense to talk about the "byte order" without
assuming that the value bits are grouped into bytes according to their
value; and that assumption is not portable. At least not in theory.

Using your method, how do you tell where the padding bits are located? If
you can't, how useful is it to know the "byte order"?

Mar 30 '06 #97


David R Tribble wrote On 03/30/06 16:55,:
[...]

Wojtek Lerch wrote:
I have to say that I find it rather vague and simplistic, and can't find
where it answers my questions. I have absolutely no clue how you wanted to
handle implementations that are neither clearly little-endian nor clearly
big-endian. You didn't propose to ban them, did you?

No, that's why there are three endianness macros. This allows for,
say, the PDP-11 mixed-endian 'long int' type:

#define _ORD_BIG 0 /* Big-endian */
#define _ORD_LITTLE 0 /* Little-endian */


Does the Standard require that the 1's bit and the
2's bit of an `int' reside in the same byte? Or is the
implementation free to scatter the bits of the "pure
binary" representation among the different bytes as it
pleases? (It must, of course, scatter the corresponding
bits of signed and unsigned versions in the same way.)

If the latter, I think there's the possibility (a
perverse possibility) of a very large number of permitted
"endiannesses," something like

(sizeof(type) * CHAR_BIT) !
-----------------------------
(CHAR_BIT !) ** sizeof(type)

Argument: There are `sizeof(type) * CHAR_BIT' bits (value,
sign, and padding) in the object, so the number of ways to
permute the bits is the factorial of that quantity. But C
cannot detect the arrangement of individual bits within a
byte, so each byte of the object divides the number of
detectably different arrangements by `CHAR_BIT!'.

For an `int' made up of four eight-bit bytes, this
gives 32! / (8! ** 4) ~= 1e17 "endiannesses," or one tenth
of a billion billion.
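
The arithmetic checks out; a quick sketch that evaluates it in double
precision:

#include <stdio.h>

static double factorial(int n)
{
    double f = 1.0;
    for (int i = 2; i <= n; i++)
        f *= i;
    return f;
}

int main(void)
{
    double num = factorial(32);                      /* (4 * 8)!   */
    double den = factorial(8) * factorial(8)
               * factorial(8) * factorial(8);        /* (8!) ** 4  */
    printf("%.3g\n", num / den);                     /* about 1e17 */
    return 0;
}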

--
Er*********@sun.com

Mar 30 '06 #98
On 2006-03-30, Wojtek Lerch <Wo******@yahoo.ca> wrote:
(a) 0x12, 0x34 ("big endian", with a padding bit at the top of each
byte)
(b) 0x24, 0x68 ("big endian", with a padding bit at the bottom of each
byte)
(b) 0x22, 0x64 ("big endian", with a padding bit in the middle of each
byte)
(c) 0x34, 0x12 ("little endian", padding at the top)
(d) 0x68, 0x24 ("little endian", padding at the bottom)
(e) 0x23, 0x14 ("middle endian", with the middle bits in the first byte, a
padding bit at the top of each byte)


You forgot 0x09, 0x34, big-endian with the padding bits at the top of
the word, which is, to me, the most obvious of all.
Mar 31 '06 #99
"Jordan Abel" <ra*******@gmail.com> wrote in message
news:sl***********************@random.yi.org...
You forgot 0x09, 0x34, big-endian with the padding bits at the top of
the word, which is, to me, the most obvious of all.


The truth is I didn't think of it because my example originally had 8-bit
bytes and no padding. But as far as my point is concerned, it doesn't
matter which combination is the most obvious one, only that there are
zillions of valid combinations.
Mar 31 '06 #100
