Bytes IT Community

Pointing to high and low bytes of something

My code contains this declaration:

: typedef union {
: word Word;
: struct {
: byte Low;
: byte High;
: } Bytes;
: } reg;

The colons are not part of the declaration.

Assume that 'word' is always a 16-bit unsigned integral type, and that
'byte' is always an 8-bit unsigned integral type ('unsigned short int'
and 'unsigned char' respectively on my implementation).

My understanding, after browsing through previous threads on this and
other newsgroups, is that given a variable Var of type reg, accessing
Var.Word after having assigned values to Var.Bytes.Low and
Var.Bytes.High or, conversely, accessing Var.Bytes.Low and
Var.Bytes.High after having assigned a value to Var.Word, results in
implementation-defined behavior (or possibly undefined behavior).

If it is indeed implementation-defined behavior, my question is: can
the implementation only take the liberty to choose whether
Var.Bytes.Low or Var.Bytes.High will contain the LSB of Var.Word, and
whether Var.Bytes.High or Var.Bytes.Low will contain the MSB, or can
the implementation take other liberties?

Intuitively, I would say that there is more than this (specifically,
that the compiler can insert padding after the first member of the
Bytes struct), but some articles I've read seemed to imply otherwise.

Anyway, it all comes down to: assume that I am willing to sacrifice
portability by forcing the maintainer to exchange the positions of the
two members of Bytes depending on the implementation; do I then have a
guarantee that Var.Bytes.Low will always evaluate to the LSB of
Var.Word, and that Var.Bytes.High will always evaluate to the MSB of
Var.Word?

If not, then I would gladly accept suggestions on how to change my
code.
Keep in mind that I need to access:
1) Var.Word (or its equivalent after the change) by address
2) Var.Bytes.Low (or its equivalent) by address
3) Var.Bytes.High (or its equivalent) by address
to the effect that this code can be modified in a straightforward way
to work as intended:

: #include <stdlib.h>
: #include <stdio.h>
:
: int main(void) {
: reg Var;
: word *VarWordP;
: byte *VarLSBP;
: byte *VarMSBP;
: VarWordP = &(Var.Word);
: VarLSBP = &(Var.Bytes.Low);
: VarMSBP = &(Var.Bytes.High);
: *VarWordP=0x1234;
: printf("%x %x %x\n", *VarWordP, *VarLSBP, *VarMSBP);
: return 0;
: }

Assume type reg has been defined as above. I should always get
1234 34 12
as the program's output, save any changes that could be needed in the
printf() format specifiers.
by LjL
lj****@tiscali.it
Nov 13 '05 #1
19 Replies


dl*****@tiscalinet.it (Lorenzo J. Lucchini) wrote:
: typedef union {
: word Word;
: struct {
: byte Low;
: byte High;
: } Bytes;
: } reg;

Assume that 'word' is always a 16-bit unsigned integral type, and that
'byte' is always an 8-bit unsigned integral type ('unsigned short int'
and 'unsigned char' respectively on my implementation).

If it is indeed implementation-defined behavior, my question is: can
the implementation only take the liberty to choose whether
Var.Bytes.Low or Var.Bytes.High will contain the LSB of Var.Word, and
whether Var.Bytes.High or Var.Bytes.Low will contain the MSB, or can
the implementation take other liberties?

Intuitively, I would say that there is more than this (specifically,
that the compiler can insert padding after the first member of the
Bytes struct), but some articles I've read seemed to imply otherwise.


They can, but I've never seen one that does; probably the scarcity of
implementations that do so is part of the reason for the belief that
they can't.

Richard
Nov 13 '05 #2

lj****@tiscalinet.it (Lorenzo J. Lucchini) wrote:
# My code contains this declaration:
#
# : typedef union {
# : word Word;
# : struct {
# : byte Low;
# : byte High;
# : } Bytes;
# : } reg;

# Anyway, it all comes down to: assume that I am willing to sacrifice
# portability by forcing the maintainer to exchange the positions of the

If you're willing to sacrifice portability, then try this on each machine
you're interested in, and if it works there, you're done. You can also
have a dynamic test in main(), perhaps,
{
reg x; x.Word = 0x1234;
if (x.Bytes.Low==0x34 && x.Bytes.High==0x12)
puts("reg okay");
else if (x.Bytes.Low==0x12 && x.Bytes.High==0x34)
puts("reg byte-swapped");
else
puts("reg completely confused");
}

--
Derk Gwen http://derkgwen.250free.com/html/index.html
Raining down sulphur is like an endurance trial, man. Genocide is the
most exhausting activity one can engage in. Next to soccer.
Nov 13 '05 #3

In <1f**************************@posting.google.com > lj****@tiscalinet.it (Lorenzo J. Lucchini) writes:
My code contains this declaration:

: typedef union {
: word Word;
: struct {
: byte Low;
: byte High;
: } Bytes;
: } reg;

The colons are not part of the declaration.

Assume that 'word' is always a 16-bit unsigned integral type, and that
'byte' is always an 8-bit unsigned integral type ('unsigned short int'
and 'unsigned char' respectively on my implementation).

My understanding, after browsing through previous threads on this and
other newsgroups, is that given a variable Var of type reg, accessing
Var.Word after having assigned values to Var.Bytes.Low and
Var.Bytes.High or, conversely, accessing Var.Bytes.Low and
Var.Bytes.High after having assigned a value to Var.Word, results in
implementation-defined behavior (or possibly undefined behavior).
Accessing Var.Bytes.High and Var.Bytes.Low (after initialising Var.Word)
will always provide implementation-defined results, with no possibility
of undefined behaviour. But not the other way round.
If it is indeed implementation-defined behavior, my question is: can
the implementation only take the liberty to choose whether
Var.Bytes.Low or Var.Bytes.High will contain the LSB of Var.Word, and
whether Var.Bytes.High or Var.Bytes.Low will contain the MSB, or can
the implementation take other liberties?

Intuitively, I would say that there is more than this (specifically,
that the compiler can insert padding after the first member of the
Bytes struct), but some articles I've read seemed to imply otherwise.
Your intuition is correct: in theory, the compiler *can* do that.
In practice, padding bytes are inserted only when they serve a *good*
purpose. Inserting padding byte(s) between Low and High would be
downright perverse, since, *in the framework of your assumptions*, no
padding bytes are needed at all: you're merely aliasing a two-byte object
by two independent bytes.
Anyway, it all comes down to: assume that I am willing to sacrifice
portability by forcing the maintainer to exchange the positions of the
two members of Bytes depending on the implementation; do I then have a
guarantee that Var.Bytes.Low will always evaluate to the LSB of
Var.Word, and that Var.Bytes.High will always evaluate to the MSB of
Var.Word?
In practice, yes, assuming that your initial assumptions still hold.
If not, then I would gladly accept suggestions on how to change my
code.
Keep in mind that I need to access:
1) Var.Word (or its equivalent after the change) by address
2) Var.Bytes.Low (or its equivalent) by address
3) Var.Bytes.High (or its equivalent) by address
to the effect that this code can be modified in a straightforward way
to work as intended:

: #include <stdlib.h>
: #include <stdio.h>
:
: int main(void) {
: reg Var;
: word *VarWordP;
: byte *VarLSBP;
: byte *VarMSBP;
: VarWordP=&(Var.Word);
: VarLSBP=&(Var.Bytes.Low);
: VarMSBP=&(Var.Bytes.High);
: *VarWordP=0x1234;
: printf("%x %x %x\n", *VarWordP, *VarLSBP, *VarMSBP);
: return 0;
: }

Assume type reg has been defined as above. I should always get
1234 34 12
as the program's output, save any changes that could be needed in the
printf() format specifiers.


You don't need the union at all for this purpose:

word foo, *wp = &foo;
byte *ph, *pl;
pl = (byte *)wp; /* or the other way round, depending on the */
ph = pl + 1; /* implementation */
*wp = 0x1234;
printf("%x %x %x\n", (unsigned)*wp, (unsigned)*lp, (unsigned)*hp);

Now, even the most perverse compiler cannot affect the behaviour of your
code: you're pointing at the two bytes of foo directly, without using
any structs and unions. The only (unavoidable) assumption (apart from the
ones explicitly stated at the beginning of your post) is about which of
the two bytes of a word is the LSB and which the MSB.

Also note the casts in the printf call: %x expects an unsigned value and
there is no guarantee that any of the three values will get promoted to
this type (signed int is far more probable). So, you must provide the
right type explicitly (again, the code will work without the casts as well
in practice, but you have nothing to gain by not doing the right thing).

Dan
--
Dan Pop
DESY Zeuthen, RZ group
Email: Da*****@ifh.de
Nov 13 '05 #4

Derk Gwen <de******@HotPOP.com> wrote in message news:<vr************@corp.supernews.com>...
lj****@tiscalinet.it (Lorenzo J. Lucchini) wrote:
[using unions to extract MSB and LSB from something]


If you're willing to sacrifice portability, then try this on each machine
you're interested in, and if it works there, you're done. You can also
have a dynamic test in main(), perhaps,
{
reg x; x.Word = 0x1234;
if (x.Bytes.Low==0x34 && x.Bytes.High==0x12)
puts("reg okay");
else if (x.Bytes.Low==0x12 && x.Bytes.High==0x34)
puts("reg byte-swapped");
else
puts("reg completely confused");
}


I am interested in every machine someone could decide to compile my
code on.
By "sacrificing portability" I simply mean that this hypothetical
person should go through the hassle of checking whether his or her
machine is little-endian or big-endian, and uncomment the relevant
#define.
What I would *not* like to get on any machine is the "reg completely
confused".
See the reply I'm just about to write to Dan Pop's article for further
questions about if and when the "confused" part can occur.

by LjL
lj****@tiscali.it
Nov 13 '05 #5

"Dan Pop" <Da*****@cern.ch> wrote:
You don't need the union at all for this purpose:

word foo, *wp = &foo;
byte *ph, *pl;
pl = (byte *)wp; /* or the other way round, depending on the */
ph = pl + 1; /* implementation */
*wp = 0x1234;
printf("%x %x %x\n", (unsigned)*wp, (unsigned)*lp, (unsigned)*hp);
ITYM pl ph
Now, even the most perverse compiler cannot affect the behaviour of your
code: you're pointing at the two bytes of foo directly, without using
any structs and unions. The only (unavoidable) assumption (apart from the
ones explicitly stated at the beginning of your post) is about which of
the two bytes of a word is the LSB and which the MSB.


You are pointing at two bytes of foo directly, yes. There are no
guarantees that what you get out will be either 0x12 or 0x34 for
either of the byte results. The object representation of an
unsigned integer type (apart from unsigned char) is not specified
to the level where you must be able to identify definite MSB and
LSB where the value is given as (MSB << n) + LSB -- the bits could
be organised in a different order, and there could be padding bits.

--
Simon.
Nov 13 '05 #6

Da*****@cern.ch (Dan Pop) wrote in message news:<bp**********@sunnews.cern.ch>...
In <1f**************************@posting.google.com > lj****@tiscalinet.it (Lorenzo J. Lucchini) writes:
My code contains this declaration:

[unions to extract MSB and LSB from something]

My understanding, after browsing through previous threads on this and
other newsgroups, is that given a variable Var of type reg, accessing
Var.Word after having assigned values to Var.Bytes.Low and
Var.Bytes.High or, conversely, accessing Var.Bytes.Low and
Var.Bytes.High after having assigned a value to Var.Word, results in
implementation-defined behavior (or possibly undefined behavior).
Accessing Var.Bytes.High and Var.Bytes.Low (after initialising Var.Word)
will always provide implementation-defined results, with no possibility
of undefined behaviour. But not the other way round.


What do you mean with "not the other way round"? That accessing
Var.Word after initializing Var.Bytes.High and Var.Bytes.Low could
result in *undefined* behavior? If so, then I'm already invoking nasal
demons.
[snip]

Intuitively, I would say that there is more than this (specifically,
that the compiler can insert padding after the first member of the
Bytes struct), but some articles I've read seemed to imply otherwise.


Your intuition is correct: in theory, the compiler *can* do that.
In practice, padding bytes are inserted only when they serve a *good*
purpose. Inserting padding byte(s) between Low and High would be
downright perverse, since, *in the framework of your assumptions*, no
padding bytes are needed at all: you're merely aliasing a two-byte object
by two independent bytes.


Couldn't an architecture require that my 'byte's be aligned on word
boundaries?
Then two of my 'byte's could take up two machine words, while one of
my 'word's would take only one machine word (assuming the machine word
is 16 bits).
Am I missing something the standard requires here?

(Note that, while I'm calling my types 'byte' and 'word', they don't
have to correspond to a machine byte and a machine word; they only
need to be 8 bits and 16 bits wide respectively. For the record,
'byte' and 'word' try to mimic machine bytes and machine words of a
Z80)
[Am I guaranteed my approach will work?]


In practice, yes, assuming that your initial assumptions still hold.


My assumptions will hold as soon as someone has changed the #define's
the way I told them to.
I have no problem with people having to change even more #define's to
tell my program which byte is in Var.Bytes.High and which is in
Var.Bytes.Low... but I *would* have a problem if someone could have a
machine where the result won't be right no matter which #define is
uncommented.
[snip]


You don't need the union at all for this purpose:

word foo, *wp = &foo;
byte *ph, *pl;
pl = (byte *)wp; /* or the other way round, depending on the */
ph = pl + 1; /* implementation */
*wp = 0x1234;
printf("%x %x %x\n", (unsigned)*wp, (unsigned)*lp, (unsigned)*hp);

Now, even the most perverse compiler cannot affect the behaviour of your
code: you're pointing at the two bytes of foo directly, without using
any structs and unions. The only (unavoidable) assumption (apart from the
ones explicitly stated at the beginning of your post) is about which of
the two bytes of a word is the LSB and which the MSB.


This looks like an interesting solution - to be sure I'm on the safe
side, if nothing else.
But... assuming the scenario I outlined above (machine with
word-aligned bytes and such) isn't forbidden by the standard, couldn't
it cause problems (or invoke demons) with this formulation, too?
I can see a hint that it shouldn't in the ANSI rationale, but I can't
call it more than a hint (I'm clueless, I'll admit).
Also note the casts in the printf call: %x expects an unsigned value and
there is no guarantee that any of the three values will get promoted to
this type (signed int is far more probable). So, you must provide the
right type explicitly (again, the code will work without the casts as well
in practice, but you have nothing to gain by not doing the right thing).


Thank you for pointing this out; I should remember this more often
than I currently do, since while the code might work, it's nowhere
near nice to see stuff like "fffffff3" where an "f3" was expected.

by LjL
lj****@inwind.it
Nov 13 '05 #7

In <3f***********************@news.optusnet.com.au> "Simon Biber" <ne**@ralminNOSPAM.cc> writes:
"Dan Pop" <Da*****@cern.ch> wrote:
You don't need the union at all for this purpose:

word foo, *wp = &foo;
byte *ph, *pl;
pl = (byte *)wp; /* or the other way round, depending on the */
ph = pl + 1; /* implementation */
*wp = 0x1234;
printf("%x %x %x\n", (unsigned)*wp, (unsigned)*lp, (unsigned)*hp);


ITYM pl ph
Now, even the most perverse compiler cannot affect the behaviour of your
code: you're pointing at the two bytes of foo directly, without using
any structs and unions. The only (unavoidable) assumption (apart from the
ones explicitly stated at the beginning of your post) is about which of
the two bytes of a word is the LSB and which the MSB.


You are pointing at two bytes of foo directly, yes. There are no
guarantees that what you get out will be either 0x12 or 0x34 for
either of the byte results. The object representation of an
unsigned integer type (apart from unsigned char) is not specified
to the level where you must be able to identify definite MSB and
LSB where the value is given as (MSB << n) + LSB -- the bits could
be organised in a different order, and there could be padding bits.


The initial set of assumptions excludes the possibility of padding bits:
there is no place for them in a 16-bit unsigned integer type.

Your observation about LSB and MSB is theoretically correct, but C
implementations on 8-bit bytes machines are supposed to use the
underlying hardware bit order, which never assigns the bits randomly.
The only known variation is the byte order, but not the order of bits
inside a byte.

Dan
--
Dan Pop
DESY Zeuthen, RZ group
Email: Da*****@ifh.de
Nov 13 '05 #8

"Simon Biber" <ne**@ralminNOSPAM.cc> wrote in message news:<3f***********************@news.optusnet.com. au>...
"Dan Pop" <Da*****@cern.ch> wrote:
You don't need the union at all for this purpose:

word foo, *wp = &foo;
byte *ph, *pl;
pl = (byte *)wp; /* or the other way round, depending on the */
ph = pl + 1; /* implementation */
*wp = 0x1234;
printf("%x %x %x\n", (unsigned)*wp, (unsigned)*lp, (unsigned)*hp);


ITYM pl ph
Now, even the most perverse compiler cannot affect the behaviour of your
code: you're pointing at the two bytes of foo directly, without using
any structs and unions. The only (unavoidable) assumption (apart from the
ones explicitly stated at the beginning of your post) is about which of
the two bytes of a word is the LSB and which the MSB.


You are pointing at two bytes of foo directly, yes. There are no
guarantees that what you get out will be either 0x12 or 0x34 for
either of the byte results. The object representation of an
unsigned integer type (apart from unsigned char) is not specified
to the level where you must be able to identify definite MSB and
LSB where the value is given as (MSB << n) + LSB -- the bits could
be organised in a different order, and there could be padding bits.


Add to this the fact that my initial assumptions ('byte' is an 8-bit
unsigned type, 'word' a 16-bit unsigned type) may not be met by *any*
type on a specific implementation.

I was pondering the implications of this fact last night before
sleeping: it seemed to me that the only viable way to be really
portable (but my definition of being 'portable' accepts having
#define's to tweak for each implementation) was to use an 'at least
8-bit wide unsigned type' (unsigned char, namely) instead of 'byte'
and an 'at least 16-bit wide unsigned type' (unsigned short int)
instead of 'word'; a consequence of this is that I'd have to
explicitly do every arithmetic operation in modulo-256 or
modulo-65536, while this was achieved quietly with 'exactly 8-bit' and
'exactly 16-bit' types.

Do you think many implementations will optimize my %256's and %65536's
away when they're compiling for a machine with 8-bit chars and 16-bit
shorts?
The knowledge that many do would help me undertake the change much
more light-heartedly.

The problem would remain, though, of how to access the MSB and LSB of
the at-least-16-bits-wide values (which would be guaranteed by my
modulo operations never to go past 65535).
Of course I can use shifts and masks... but those don't give lvalues,
while dereferenced pointers certainly can.

It would seem that I'll have to rework the logic of my program... but
I do hope I am mistaken here.
by LjL
lj****@tiscali.it
Nov 13 '05 #9

In <1f**************************@posting.google.com > lj****@tiscalinet.it (Lorenzo J. Lucchini) writes:
Da*****@cern.ch (Dan Pop) wrote in message news:<bp**********@sunnews.cern.ch>...
In <1f**************************@posting.google.com > lj****@tiscalinet.it (Lorenzo J. Lucchini) writes:
>My code contains this declaration:
>
> [unions to extract MSB and LSB from something]
>
>My understanding, after browsing through previous threads on this and
>other newsgroups, is that given a variable Var of type reg, accessing
>Var.Word after having assigned values to Var.Bytes.Low and
>Var.Bytes.High or, conversely, accessing Var.Bytes.Low and
>Var.Bytes.High after having assigned a value to Var.Word, results in
>implementation-defined behavior (or possibly undefined behavior).
Accessing Var.Bytes.High and Var.Bytes.Low (after initialising Var.Word)
will always provide implementation-defined results, with no possibility
of undefined behaviour. But not the other way round.


What do you mean with "not the other way round"? That accessing
Var.Word after initializing Var.Bytes.High and Var.Bytes.Low could
result in *undefined* behavior? If so, then I'm already invoking nasal
demons.


Yup. It cannot happen in your particular case, because there is no place
for padding bits, but you cannot count on it in the general case.
>[snip]

>Intuitively, I would say that there is more than this (specifically,
>that the compiler can insert padding after the first member of the
>Bytes struct), but some articles I've read seemed to imply otherwise.


Your intuition is correct: in theory, the compiler *can* do that.
In practice, padding bytes are inserted only when they serve a *good*
purpose. Inserting padding byte(s) between Low and High would be
downright perverse, since, *in the framework of your assumptions*, no
padding bytes are needed at all: you're merely aliasing a two-byte object
by two independent bytes.


Couldn't an architecture require that my 'byte's be aligned on word
boundaries?


Nope. Not as long as your 'byte' is defined as a character type.
Then two of my 'byte's could take up two machine words, while one of
my 'word's would take only one machine word (assuming the machine word
is 16 bits).
Am I missing something the standard requires here?
Yup. Character types have no alignment requirements. And your 'byte'
must be defined as a character type if it has to be an 8-bit type.
Any other standard C89 type is at least 16 bits wide.
(Note that, while I'm calling my types 'byte' and 'word', they don't
have to correspond to a machine byte and a machine word; they only
need to be 8 bits and 16 bits wide respectively. For the record,
'byte' and 'word' try to mimic machine bytes and machine words of a
Z80)
It doesn't matter. In C89, an 8-bit type is either a character type or
nothing at all.
You don't need the union at all for this purpose:

word foo, *wp = &foo;
byte *ph, *pl;
pl = (byte *)wp; /* or the other way round, depending on the */
ph = pl + 1; /* implementation */
*wp = 0x1234;
printf("%x %x %x\n", (unsigned)*wp, (unsigned)*pl, (unsigned)*ph);

Now, even the most perverse compiler cannot affect the behaviour of your
code: you're pointing at the two bytes of foo directly, without using
any structs and unions. The only (unavoidable) assumption (apart from the
ones explicitly stated at the beginning of your post) is about which of
the two bytes of a word is the LSB and which the MSB.


This looks like an interesting solution - to be sure I'm on the safe
side, if nothing else.
But... assuming the scenario I outlined above (machine with
word-aligned bytes and such) isn't forbidden by the standard, couldn't


There is no such thing. Character types have no alignment restrictions.
it cause problems (or invoke demons) with this formulation, too?
I can see a hint that it shouldn't in the ANSI rationale, but I can't
call it more than a hint (I'm clueless, I'll admit).


No, aliasing an object by an array of unsigned char is guaranteed to work.
Also note the casts in the printf call: %x expects an unsigned value and
there is no guarantee that any of the three values will get promoted to
this type (signed int is far more probable). So, you must provide the
right type explicitly (again, the code will work without the casts as well
in practice, but you have nothing to gain by not doing the right thing).


Thank you for pointing this out; I should remember this more often
than I currently do, since while the code might work, it's nowhere
near nice to see stuff like "fffffff3" where an "f3" was expected.


This is not going to happen if the types you use are unsigned, even if
the type they are promoted to is signed.

Dan
--
Dan Pop
DESY Zeuthen, RZ group
Email: Da*****@ifh.de
Nov 13 '05 #10

Dan Pop wrote:

Your observation about LSB and MSB is theoretically correct, but C
implementations on 8-bit bytes machines are supposed to use the
underlying hardware bit order, which never assigns the bits randomly.
The only known variation is the byte order, but not the order of bits
inside a byte.


<pet-peeve>

Unless the machine is able to address objects smaller
than a byte, "the order of bits inside a byte" is undetectable
and need not even be meaningful at all.

An example, from the ever-popular DeathStation 9000.
As everyone knows, some models of the DS9000 use an eleven-
bit byte, and the hardware manual says that those eleven
bits occupy all but one of the vertices of a regular
dodecahedron (the twelfth vertex is reserved for future
expansion).

Another DS9000 model also uses eleven-bit bytes, but
arranges them differently: the ten low-order bits are
stored in five four-state "fits" with the sign bit in a
single two-state device positioned between the second
and third fits:

512, 1 : leftmost fit
256, 2 : second fit
sign : bit
128, 4 : third fit
64, 8 : fourth fit
32, 16 : rightmost fit

The challenge is to devise a C program that can
determine which DS9000 model it is running on. I do not
believe the challenge can be met, and so I assert that
"the order of bits inside a byte" is a vacuous concept
on machines that don't support sub-byte addressing.

</pet-peeve>

--
Er*********@sun.com
Nov 13 '05 #11

In <3F***************@sun.com> Eric Sosman <Er*********@sun.com> writes:
Dan Pop wrote:

Your observation about LSB and MSB is theoretically correct, but C
implementations on 8-bit bytes machines are supposed to use the
underlying hardware bit order, which never assigns the bits randomly.
The only known variation is the byte order, but not the order of bits
inside a byte.


<pet-peeve>

Unless the machine is able to address objects smaller
than a byte, "the order of bits inside a byte" is undetectable
and need not even be meaningful at all.


When you map a wider object by an array of bytes, it helps a lot if the
order of bits inside the byte is consistent with the order of bits inside
the wider object. Both hardware designers and C implementors seem to
agree on this point.

Dan
--
Dan Pop
DESY Zeuthen, RZ group
Email: Da*****@ifh.de
Nov 13 '05 #12

Dan Pop wrote:

In <3F***************@sun.com> Eric Sosman <Er*********@sun.com> writes:
Dan Pop wrote:

Your observation about LSB and MSB is theoretically correct, but C
implementations on 8-bit bytes machines are supposed to use the
underlying hardware bit order, which never assigns the bits randomly.
The only known variation is the byte order, but not the order of bits
inside a byte.


<pet-peeve>

Unless the machine is able to address objects smaller
than a byte, "the order of bits inside a byte" is undetectable
and need not even be meaningful at all.


When you map a wider object by an array of bytes, it helps a lot if the
order of bits inside the byte is consistent with the order of bits inside
the wider object. Both hardware designers and C implementors seem to
agree on this point.


<topicality degree="straying">

I think you've missed the thrust of the argument, or
perhaps the argument's thrust went wide of you. I'm saying
that (1) bit order is not detectable by any C construct I
can imagine, (2) bit order is not detectable by any CPU
instruction on a machine that lacks bit-level addressing,
(3) by Occam's Razor, that which is undetectable is better
omitted from discussion.

I offered two fanciful examples of situations where bit
order could not really be said to exist at all. For a real-
life example, consider the signals that travel between a pair
of modems. Early modems used two easily-distinguished tones
to transmit individual bits: BEEP for zero and BOOP for one,
as it were. Later, it was found that higher speeds could be
obtained by associating the bits with the transitions between
the tones rather than with the tones themselves. Modern
modems go even further: they use a whole palette of tones
(BEEP, BOOP, BRAAP, BZZZ, ...) and encode a whole bunch of
bits in each transition.

The question: What is the "bit order" of the N bits
encoded by one single BZZZ-to-BEEP transition in this scheme?
Note that all N bits leave the transmitter encoded in one
single event and arrive at the receiver the same way: they
are simultaneous and indivisible -- and I say the entire
idea of "bit order" in such a situation is meaningless.

</topicality>

--
Er*********@sun.com
Nov 13 '05 #13

"Simon Biber" <ne**@ralminNOSPAM.cc> wrote in message news:<3f***********************@news.optusnet.com. au>...
"Dan Pop" <Da*****@cern.ch> wrote:
When you map a wider object by an array of bytes, it helps a
lot if the order of bits inside the byte is consistent with
the order of bits inside the wider object. Both hardware
designers and C implementors seem to agree on this point.


I think Dan gets the point while Eric does not.

Even if we assume that the type unsigned short
(a) is 16 bits
(b) is two bytes
(c) has no padding bits

If I wrote:
unsigned short x = 0x1234;
unsigned char *a = (unsigned char *)&x;

a[0] need not be either 0x12 or 0x34, and a[1] need not be
either 0x12 or 0x34. This is because the value bits can be
stored in a DIFFERENT order for unsigned short compared to
unsigned char.

The value 0x1234 could be mapped into the two bytes as:
a[0] == 0x13
a[1] == 0x24
If we then replace:
a[0] = 0x26
a[1] = 0x48
And then see that
x == 0x2468
I believe that would be a conforming implementation.


Sure, an implementation conforming to my desire to strangle it.
But I can see your point. Do you have any suggestion on how to solve
my puzzle (with bit-masks being the only viable solution I suppose,
given the above)? My problem can be basically summarized as:
1) I am given a 'pointer' to a 'byte' or a 'word' (I know in advance
whether it'll be 'byte' or 'word', so I can branch accordingly). While
my 'pointers' are real pointers ATM, feel free to extend the meaning
of 'pointer' as "anything that uniquely identifies the object it
refers to".
2) I should be able to use the dereferenced pointer both as an
expression value and as an lvalue; I need to assign to it.
3) Whenever I assign to a dereferenced pointer, the value of the
dereferenced pointer itself changes (obviously), but there is at least
another dereferenced pointer among those I can get at point 1) that
changes simultaneously. Specifically, if I assign to a deref. pointer
to 'word', two deref. pointers to two 'byte's will change; if I assign
to a deref. pointer to 'byte', one deref. pointer to 'word' will
change.

In real life, in case it's easier to understand, this translates to:
I am simulating a processor that has some registers called B, C, D, E,
H and L. These are 8-bit. The processor, however, can also treat them
as the 16-bit pairs BC, DE and HL.
Given a simulated machine instruction (which by definition tells me
whether it wants to access a register or a register pair), I can call
a function(said instruction) that returns me a pointer to the operand
- that is, depending on the instruction, a pointer to an 8-bit or to a
16-bit value.
I then use the dereferenced pointer how I see fit.

With a solution that has (B, C, D, E, H, L) and (BC, DE, HL) as separate
variables, when one group gets modified, the other group does not need
to be synchronized immediately; it can wait until the next loop
iteration, if that helps.

Of course, I do have (more than) a solution: for example, I could
simply use an 'assign' function, instead of the normal C assignment
operator, that takes care of synchronizing the values.
But it's not a solution I like too much, and I was hoping someone here
could find a more elegant one (or, dare I say, a 'more efficient' one,
with the word 'efficiency' being defined vaguely enough - say, as few
dumb synchronize-thee function calls as possible).
On a side note, anyone who has the temper to tell me "keep using your
pointers, no implementation in the next 50 years will mess them up"?
:-)
by LjL
lj****@tiscali.it
Nov 13 '05 #14

On 20 Nov 2003 04:11:38 -0800, lj****@tiscalinet.it (Lorenzo J.
Lucchini) wrote:
Sure, an implementation conforming to my desire to strangle it.
But I can see your point. Do you have any suggestion on how to solve
my puzzle (with bit-masks being the only viable solution I suppose,
given the above)?


Sure, but it's OT here. Stop worrying about it, and include a comment
saying "If this should fail when ported to another implementation,
please call Simon."

--
Al Balmer
Balmer Consulting
re************************@att.net
Nov 13 '05 #15

In <1f**************************@posting.google.com > lj****@tiscalinet.it (Lorenzo J. Lucchini) writes:
In real life, in case it's easier to understand, this translates to:
I am simulating a processor that has some registers called B, C, D, E,
H and L. These are 8-bit. The processor, however, can also treat them
as the 16-bit pairs BC, DE and HL.
Given a simulated machine instruction (which by definition tells me
whether it wants to access a register or a register pair), I can call
a function(said instruction) that returns me a pointer to the operand
- that is, depending on the instruction, a pointer to an 8-bit or to a
16-bit value.
I then use the dereferenced pointer how I see fit.

With a solution that has (B, C, D, E, H, L) and (BC, DE, HL) as separate
variables, when one group gets modified, the other group does not need
to be synchronized immediately; it can wait until the next loop
iteration, if that helps.


The "no assumptions" solution is to simply use an array of unsigned char
for storing the values of the individual registers, in the order
B, C, D, E, H, L, padding or F, A. This order is made obvious by the
Z80/8080 instruction encoding.

When you need a register pair, you compute it on the fly:

words[DE] = ((unsigned)regs[D] << 8) + regs[E];

When an instruction has modified a register pair (few Z80 and even fewer
8080 instructions can do that), you update the individual registers:

regs[D] = words[DE] >> 8;
regs[E] = words[DE] & 0xFF;

I also believe that this approach will actually simplify the overall
coding of your simulator, because it allows the register fields inside
the opcodes to be used directly as indices into the array: you never
have to figure out which variable corresponds to a value of 2 in the
register field, you simply use 2 as an index into the registers array.
The instruction decoding becomes a piece of cake, this way.

The words array doesn't have to be kept in sync at all, except when
simulating a word instruction or indirect addressing via HL (and even
then, only the relevant elements have to be synchronised).

Mapping the registers by words doesn't work well for little endian
platforms, because the right way of doing it (in the framework of your
initial assumptions) would be:

unsigned short words[4];
unsigned char *regs = (unsigned char *)words;

But this would map B into the LSB of BC and C into the MSB of BC, which
is wrong. And you really want to store the registers in the order
defined above.

The union approach may look tempting, but it doesn't fit very well into
the scheme of a simple and efficient emulator. Using the right data
structure and format for the registers is essential for the rest of the
code of the simulator and I believe that my solution, apart from relying
on no assumptions, is also optimal for the rest of the program.

Dan
--
Dan Pop
DESY Zeuthen, RZ group
Email: Da*****@ifh.de
Nov 13 '05 #16

Da*****@cern.ch (Dan Pop) wrote in message news:<bp**********@sunnews.cern.ch>...
In <1f**************************@posting.google.com > lj****@tiscalinet.it (Lorenzo J. Lucchini) writes:
[Registers and register pairs on a Z80 and how to handle a
simulation of them in C]
The "no assumptions" solution is to simply use an array of unsigned char
for storing the values of the individual registers, in the order
B, C, D, E, H, L, padding or F, A. This order is made obvious by the
Z80/8080 instruction encoding.

When you need a register pair, you compute it on the fly:

words[DE] = ((unsigned)regs[D] << 8) + regs[E];

When an instruction has modified a register pair (few Z80 and even fewer
8080 instructions can do that), you update the individual registers:

regs[D] = words[DE] >> 8;
regs[E] = words[DE] & 0xFF;

I also believe that this approach will actually simplify the overall
coding of your simulator, because it allows the register fields inside
the opcodes to be used directly as indices into the array: you never
have to figure out which variable corresponds to a value of 2 in the
register field, you simply use 2 as an index into the registers array.
The instruction decoding becomes a piece of cake, this way.


While it looks like this approach will require some quite extensive
reworking of my code (which I hoped to avoid), it does look
extremely interesting. I'll do it.

Clearly, I knew perfectly well that instructions have a register
field, which in turn means it's 'obvious' that there is intrinsically a
preferred order for the registers... Nevertheless, I don't think I
would have ever thought of putting them in an array. I cannot but
thank you for the suggestion.

I'll probably submit some code for review, when it's in a better
shape.
[snip]


by LjL
lj****@tiscali.it
Nov 13 '05 #17

In <1f**************************@posting.google.com > lj****@tiscalinet.it (Lorenzo J. Lucchini) writes:
Da*****@cern.ch (Dan Pop) wrote in message news:<bp**********@sunnews.cern.ch>...
In <1f**************************@posting.google.com > lj****@tiscalinet.it (Lorenzo J. Lucchini) writes:
> [Registers and register pairs on a Z80 and how to handle a
> simulation of them in C]


The "no assumptions" solution is to simply use an array of unsigned char
for storing the values of the individual registers, in the order
B, C, D, E, H, L, padding or F, A. This order is made obvious by the
Z80/8080 instruction encoding.

When you need a register pair, you compute it on the fly:

words[DE] = ((unsigned)regs[D] << 8) + regs[E];

When an instruction has modified a register pair (few Z80 and even fewer
8080 instructions can do that), you update the individual registers:

regs[D] = words[DE] >> 8;
regs[E] = words[DE] & 0xFF;

I also believe that this approach will actually simplify the overall
coding of your simulator, because it allows the register fields inside
the opcodes to be used directly as indices into the array: you never
have to figure out which variable corresponds to a value of 2 in the
register field, you simply use 2 as an index into the registers array.
The instruction decoding becomes a piece of cake, this way.


While it looks like this approach will require some quite extensive
reworking of my code (which I hoped to avoid), it does look
extremely interesting. I'll do it.


To show you how it works, here's a function covering a quarter of the 8080
opcode space:

#include <assert.h>   /* for the assert() below */

#define BC 0
#define DE 1
#define HL 2

#define COMP(rp) (((unsigned)regs[(rp) * 2] << 8) + regs[(rp) * 2 + 1])
#define SYNC(rp) (words[(rp)] = COMP(rp))
#define SPILL(rp) (regs[(rp) * 2] = words[(rp)] >> 8,\
                   regs[(rp) * 2 + 1] = words[(rp)] & 0xFF)

unsigned char regs[8], shadowregs[8], mem[0x10000];
unsigned words[3], ix, iy, pc, sp;

void ld8rr(int opcode)
{
int dest = (opcode & 0x38) >> 3;
int src = opcode & 7;

assert(opcode != 0x76); /* that's the HALT opcode! */

if (dest != 6 && src != 6) {
regs[dest] = regs[src];
return;
}

if (dest == 6) {
mem[COMP(HL)] = regs[src];
return;
}

regs[dest] = mem[COMP(HL)];
return;
}

COMP merely computes the value of a register pair, while SYNC also
synchronizes the respective element of the words array. SPILL is used
to update the regs when a 16-bit instruction has changed the value of a
register pair.

Dan
--
Dan Pop
DESY Zeuthen, RZ group
Email: Da*****@ifh.de
Nov 13 '05 #18

In article <news:PT*****************@newsread1.news.pas.earthlink.net>
mike gillmore <rm********@hotmail.com> writes:
I have used this little program for many years to discover the machine
endian-ness. Use it in good health.
[most of it snipped, but here is the output line]
printf( " %x %s %x isBigEndian = %s(%d)\n",
*firstBytePtr, isBigEndian ? "!=" : "==", ( unsigned char )testValue,
isBigEndian ? "TRUE" : "FALSE", isBigEndian );


So, which endian-ness does this claim for the PDP-11? Which one
*should* it claim? The PDP-11 has little-endian bytes-in-16-bit-words
and big-endian 16-bit-words-in-32-bit-longs (and also in floats).

If you give up control of endian-ness and choose instead to match
the byte order of your local machine(s), be aware that there are
more than two.
--
In-Real-Life: Chris Torek, Wind River Systems
Salt Lake City, UT, USA (40°39.22'N, 111°50.29'W) +1 801 277 2603
email: forget about it http://web.torek.net/torek/index.html
Reading email is like searching for food in the garbage, thanks to spammers.
Nov 13 '05 #19

In <bp**********@elf.torek.net> Chris Torek <no****@torek.net> writes:
If you give up control of endian-ness and choose instead to match
the byte order of your local machine(s), be aware that there are
more than two.


However, the large popularity of the terms "big endian" and "little
endian" is a strong hint that all the others are history.

Dan
--
Dan Pop
DESY Zeuthen, RZ group
Email: Da*****@ifh.de
Nov 13 '05 #20
