Hi,
I am trying to make sense of this endian problem and so far, it is
still Greek to me.
I have some files that store lat and lon data in binary
format. The data was originally 32-bit floats, written with the
most significant byte in the lowest address (big endian). I have an
Intel processor and am working in Mandrake. In Python, I solved this
with a built-in function, but in C, I am at a loss. What I have grasped
so far is that the data ought to be read into an unsigned char array and
then I have to look at the data bit by bit and then swap them. It is
this part that loses me.
I have seen this example for a 32-bit int:
unsigned char j[4];
uint32_t o;
read(descriptor, j, 4); /* error checking omitted for simplicity */
o = j[0]; o <<= 8;
o |= j[1]; o <<= 8;
o |= j[2]; o <<= 8;
o |= j[3];
Could someone please explain what this person did, or, explain how to
do this byte swap?
Sincerely,
Sheldon
On 23 Oct 2006 05:38:42 -0700,
Sheldon <sh******@gmail.com> wrote
in Msg. <11**********************@h48g2000cwc.googlegroups.com>
unsigned char j[4];
uint32_t o;
read(descriptor, j, 4); /* error checking omitted for simplicity */
o = j[0]; o <<= 8;
o |= j[1]; o <<= 8;
o |= j[2]; o <<= 8;
o |= j[3];
Could someone please explain what this person did, or, explain how to
do this byte swap?
Assuming that you know the meaning of the binary-or and the
bit-shift operator, what is it that you're having trouble with?
In fact the code is written a bit awkwardly; I'd have done it like
this:
o = (j[0] << 24) | (j[1] << 16) | (j[2] << 8) | j[3];
In your case you'd have to do it differently because the bitwise
operators won't make much sense with floats. You'd just swap the
four bytes in your array and cast the result to the appropriate
type -- assuming that the binary representation of floats on the
machine that generated the numbers is the same as on the machine
you read them with.
robert
Robert Latest skrev:
On 23 Oct 2006 05:38:42 -0700,
Sheldon <sh******@gmail.com> wrote
in Msg. <11**********************@h48g2000cwc.googlegroups.com>
unsigned char j[4];
uint32_t o;
read(descriptor, j, 4); /* error checking omitted for simplicity */
o = j[0]; o <<= 8;
o |= j[1]; o <<= 8;
o |= j[2]; o <<= 8;
o |= j[3];
Could someone please explain what this person did, or, explain how to
do this byte swap?
Assuming that you know the meaning of the binary-or and the
bit-shift operator, what is it that you're having trouble with?
In fact the code is written a bit awkwardly; I'd have done it like
this:
o = (j[0] << 24) | (j[1] << 16) | (j[2] << 8) | j[3];
In your case you'd have to do it differently because the bitwise
operators won't make much sense with floats. You'd just swap the
four bytes in your array and cast the result to the appropriate
type -- assuming that the binary representation of floats on the
machine that generated the numbers is the same as on the machine
you read them with.
robert
I cannot check to see if the binary representation on my PC and the IBM
the files were written on are the same, but they should be OK since this
conversion was done on my machine before using IDL and Python.
What I don't understand is that four bytes are read into an array and
then shifted: byte 1 to the left by 24, byte 2 by 16, byte 3 by 8 and
then byte 4 remains. What happened? Does this shifting push the bytes
forward so that at the end byte 4 is in the first 4 addresses?
I would be reading into a 2D array; how would I then do this bytewise
shifting for each position?
/Sheldon
Robert Latest wrote:
Sheldon <sh******@gmail.com> wrote
>unsigned char j[4]; uint32_t o; read(descriptor, j, 4); /* error checking omitted for simplicity */ o = j[0]; o <<= 8; o |= j[1]; o <<= 8; o |= j[2]; o <<= 8; o |= j[3];
Could someone please explain what this person did, or, explain how to do this byte swap?
Assuming that you know the meaning of the binary-or and the
bit-shift operator, what is it that you're having trouble with?
In fact the code is written a bit awkwardly; I'd have done it like
this:
o = (j[0] << 24) | (j[1] << 16) | (j[2] << 8) | j[3];
Consider the result when CHAR_BIT is 9, for instance.
--
Chuck F (cbfalconer at maineline dot net)
Available for consulting/temporary embedded and systems.
<http://cbfalconer.home.att.net>
Sheldon wrote:
Hi,
I am trying to make sense of this endian problem and so far, it is
still Greek to me.
I have some files that store lat and lon data in binary
format. The data was originally 32-bit floats, written with the
most significant byte in the lowest address (big endian). I have an
Intel processor and am working in Mandrake. In Python, I solved this
with a built-in function, but in C, I am at a loss. What I have grasped
so far is that the data ought to be read into an unsigned char array and
then I have to look at the data bit by bit and then swap them. It is
this part that loses me.
I have seen this example for a 32-bit int:
unsigned char j[4];
uint32_t o;
read(descriptor, j, 4); /* error checking omitted for simplicity */
o = j[0]; o <<= 8;
o |= j[1]; o <<= 8;
o |= j[2]; o <<= 8;
o |= j[3];
Could someone please explain what this person did, or, explain how to
do this byte swap?
Sincerely,
Sheldon
Take the 8 bits of the first byte and place them in the lower 8 bits of the
32-bit unsigned type.
Shift the whole lot left 8 bits so that they now sit in bits 8-15, with bits
0 to 7 now zero.
Now expand the second byte into a temporary uint32_t so the bits are in the
lowest 8 bits and merge this temporary one onto the first. So now the
first byte is in bits 8-15 and the second byte is in bits 0-7.
Repeat a couple more times and you get an unsigned 32 bit type with the
first byte in the top 8 bits and the last byte in the bottom 8 bits.
(and don't worry about whether the uint32 is storing its information in LSB
or MSB format; it doesn't matter)
--
Bill Medland
Bill Medland skrev:
Sheldon wrote:
Hi,
I am trying to make sense of this endian problem and so far, it is
still Greek to me.
I have some files that store lat and lon data in binary
format. The data was originally 32-bit floats, written with the
most significant byte in the lowest address (big endian). I have an
Intel processor and am working in Mandrake. In Python, I solved this
with a built-in function, but in C, I am at a loss. What I have grasped
so far is that the data ought to be read into an unsigned char array and
then I have to look at the data bit by bit and then swap them. It is
this part that loses me.
I have seen this example for a 32-bit int:
unsigned char j[4];
uint32_t o;
read(descriptor, j, 4); /* error checking omitted for simplicity */
o = j[0]; o <<= 8;
o |= j[1]; o <<= 8;
o |= j[2]; o <<= 8;
o |= j[3];
Could someone please explain what this person did, or, explain how to
do this byte swap?
Sincerely,
Sheldon
Take the 8 bits of the first byte and place them in the lower 8 bits of the
32-bit unsigned type.
Shift the whole lot left 8 bits so that they now sit in bits 8-15, with bits
0 to 7 now zero.
Now expand the second byte into a temporary uint32_t so the bits are in the
lowest 8 bits and merge this temporary one onto the first. So now the
first byte is in bits 8-15 and the second byte is in bits 0-7.
Repeat a couple more times and you get an unsigned 32 bit type with the
first byte in the top 8 bits and the last byte in the bottom 8 bits.
(and don't worry about whether the uint32 is storing its information in LSB
or MSB format; it doesn't matter)
--
Bill Medland
I see. My next question is really silly, but here goes: when I read the
data into a 15x15 array of unsigned char and then take the first
position array[0][0], how do I then break this up into groups of 4 bytes
to do the swapping?
/Sheldon
On Mon, 23 Oct 2006 09:38:23 -0400,
CBFalconer <cb********@yahoo.com> wrote
in Msg. <45***************@yahoo.com>
Robert Latest wrote:
>Sheldon <sh******@gmail.com> wrote
>>unsigned char j[4]; uint32_t o; read(descriptor, j, 4); /* error checking omitted for simplicity */ o = j[0]; o <<= 8; o |= j[1]; o <<= 8; o |= j[2]; o <<= 8; o |= j[3];
Could someone please explain what this person did, or, explain how to do this byte swap?
Assuming that you know the meaning of the binary-or and the bit-shift operator, what is it that you're having trouble with?
In fact the code is written a bit awkwardly; I'd have done it like this:
o = (j[0] << 24) | (j[1] << 16) | (j[2] << 8) | j[3];
Consider the result when CHAR_BIT is 9, for instance.
Given that this is about endianness of floating point numbers,
portability wasn't a prime concern to begin with.
robert
CBFalconer wrote:
Robert Latest wrote:
>Sheldon <sh******@gmail.com> wrote
>>unsigned char j[4]; uint32_t o; read(descriptor, j, 4); /* error checking omitted for simplicity */ o = j[0]; o <<= 8; o |= j[1]; o <<= 8; o |= j[2]; o <<= 8; o |= j[3];
Could someone please explain what this person did, or, explain how to do this byte swap?
Assuming that you know the meaning of the binary-or and the bit-shift operator, what is it that you're having trouble with?
In fact the code is written a bit awkwardly; I'd have done it like this:
o = (j[0] << 24) | (j[1] << 16) | (j[2] << 8) | j[3];
Consider the result when CHAR_BIT is 9, for instance.
If we're going to be that pedantic, doesn't it all fall apart if an int is
16-bit?
--
Bill Medland
Sheldon wrote:
I am trying to make sense of this endian problem and so far, it is
still Greek to me.
I have some files that store lat and lon data in binary
format. The data was originally 32-bit floats, written with the
most significant byte in the lowest address (big endian). I have an
Intel processor and am working in Mandrake. In Python, I solved this
with a built-in function, but in C, I am at a loss. What I have grasped
so far is that the data ought to be read into an unsigned char array and
then I have to look at the data bit by bit and then swap them. It is
this part that loses me.
Actually, you make the adjustment a byte at a time, not a bit at a time.
I have seen this example for a 32-bit int:
unsigned char j[4];
uint32_t o;
read(descriptor, j, 4); /* error checking omitted for simplicity */
o = j[0]; o <<= 8;
o |= j[1]; o <<= 8;
o |= j[2]; o <<= 8;
o |= j[3];
Could someone please explain what this person did, or, explain how to
do this byte swap?
This code computes an unsigned 32-bit value, assuming that the input is
big-endian with 8-bit bytes. It works on big-, little-, and
mixed-endian processors. j[0] is the most significant byte, so the code
shifts it left 24 bits.
To create your float, assuming the same basic fp format, you can use the
above code to convert the input to a 32-bit unsigned int, then map it to
a float by storing into a union of float and uint32_t and retrieving the
float value:
float g;
union {
float f;
uint32_t ui;
} u;
u.ui = o;
g = u.f;
or by casting a uint32_t pointer to a float pointer and dereferencing it:
g = *(float *)&o;
These methods invoke behavior that is unspecified and undefined by
Standard C, but can work in your application. You should, of course,
verify correct operation. If I were writing such code I would surround
the code with conditional preprocessor directives which ensure that the
compiler and target processor match the ones which had been verified.
Other combinations would result in an #error directive.
--
Thad
Sheldon wrote:
Hi,
I am trying to make sense of this endian problem and so far, it is
still Greek to me.
I am have some files that have stored lat and lon data in binary
format. The data was originally floats (32) and written in with the
most significant byte in the lowest address (big endian).
This is generally a bad idea. If you can avoid it, don't store floating
point numbers in a file in binary. Use a textual representation.
Consider XDR or ASN.1.
If it has to be binary and you're writing on one platform and reading
on another, you had better hope that they are both IEEE 754 (very likely)
so you only have endianness to worry about. Write out some known values
so you can work out the ordering. At the receiving end, read the values
into an array of unsigned char, reorder the octets (bytes), then build
the floating point value. A cast may suffice.
<snip>
--
Nick Keighley