Bytes IT Community

How to get around the "long" 32/64-bit mess?

I have a question that might have been asked before, but I have not been
able to find anything via groups.google.com or any web search that is
definitive.

I am writing an evolutionary AI application in C++. I want to use 32-bit
integers, and I want the application to be able to save its state in a
portable fashion.

The obvious choice would be to use the "long" type, as it is defined to be
at least 32 bits in size. However, the definition says "at least", and I
believe that on some platforms (gcc/AMD64???) long ends up being a 64-bit
integer instead. In other words, there are cases where sizeof(long) ==
sizeof(long long) if I'm not mistaken.

The problem is this: it's an evolutionary AI application, so if I use long
as the data type and run it on a platform where sizeof(long) == 8, then it
will *evolve* to take advantage of this. Then, if I save its state, that
state will either be invalid or will get truncated into 32-bit ints when I
reload it on a 32-bit platform where sizeof(long) == 4. I am trying to
make this program *fast*, so I want to avoid wasting cycles on meaningless
"& 0xffffffff" operations all over the place or other nasty hacks to get
around this type madness.

The nasty solution that I am trying to avoid is a typedefs.h type file
that defines a bunch of architecture-dependent typedefs. Yuck. Did I
mention that this C++ code is going to be compiled into a library that
will then get linked with other applications? I don't want the other
applications to have to compile with -DARCH_WHATEVER to get this library
to link correctly with its typedefs.h file nastiness.

So I'd like to use standard types. Is there *any* way to do this, or
do I have to create a typedefs.h file like every cryptography or other
integer-size-sensitive C/C++ program has to do?
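
For what it's worth, here is a minimal check showing the kind of difference
I mean (the printed values are typical examples, not guarantees):

#include <iostream>

int main() {
    // On a typical 32-bit target this prints "4 4 8"; on gcc for AMD64
    // (an LP64 platform) it typically prints "4 8 8", which is exactly
    // the portability problem described above.
    std::cout << sizeof(int) << ' '
              << sizeof(long) << ' '
              << sizeof(long long) << '\n';
    return 0;
}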

Jul 22 '05 #1
17 Replies


Adam Ierymenko wrote:
....
So I'd like to use standard types. Is there *any* way to do this, or
do I have to create a typedefs.h file like every cryptography or other
integer-size-sensitive C/C++ program has to do?


Well, you can do many things, for example save its state by converting
the values to string representations, or some other trick. On most
32-bit platforms int is 32 bits, so you can use that type on that
assumption.
Since you are looking for saved-data portability, you will *have to*
save in text mode always, since in one implementation the binary
representation of a type can be different from that of another
implementation.
Also place checks when loading the data, so that the stored values do
not exceed the limits of the ranges supported on the current platform
(or place the checks when you save the data, so that the data do not
exceed the 32-bit limitation).
Or create a class. In any case, for data portability the data must be
stored in text mode.
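
A rough sketch of what I mean, with made-up function names, assuming the
state is simply a sequence of values (the range check is done on load):

#include <cstddef>
#include <fstream>
#include <stdexcept>
#include <vector>

// Save each value as decimal text, one per line; text survives any
// byte-order or word-size differences between platforms.
void save_state(const std::vector<unsigned long>& state, const char* path) {
    std::ofstream out(path);
    for (std::size_t i = 0; i < state.size(); ++i)
        out << state[i] << '\n';
}

// Reload, rejecting anything that does not fit in 32 bits. Where unsigned
// long is wider than 32 bits the explicit range check fires; on a 32-bit
// platform the extraction itself fails on overflow.
std::vector<unsigned long> load_state(const char* path) {
    std::ifstream in(path);
    std::vector<unsigned long> state;
    unsigned long v;
    while (in >> v) {
        if (v > 0xFFFFFFFFUL)
            throw std::runtime_error("stored value exceeds 32 bits");
        state.push_back(v);
    }
    if (!in.eof())
        throw std::runtime_error("malformed or out-of-range value in state file");
    return state;
}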


Regards,

Ioannis Vranos

http://www23.brinkster.com/noicys
Jul 22 '05 #2

Ioannis Vranos wrote:
Adam Ierymenko wrote:
....


Well, you can do many things, for example save its state by converting
the values to string representations, or some other trick. On most
32-bit platforms int is 32 bits, so you can use that type on that
assumption.
Since you are looking for saved-data portability, you will *have to*
save in text mode always, since in one implementation the binary
representation of a type can be different from that of another
implementation.
In practice, all you need is something that will convert to a "known"
format. ASCII numeric strings are a "known" format, but they can be
terribly inefficient.

See:
http://groups.google.com/groups?hl=e...iani.ws&rnum=9

for an example of how to deal with endianness issues; it could easily be
extended to deal with other formats. However, it's highly unlikely that
anyone will ever need anything different.
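
As a rough sketch of the general idea (this is not the code from the linked
post; the helper names here are made up), a 32-bit value can be written and
read in a fixed byte order so the stored layout does not depend on the host:

// Write a 32-bit value big-endian (most significant byte first), so the
// file layout does not depend on the host's native byte order.
void put_u32_be(unsigned long v, unsigned char out[4]) {
    out[0] = (unsigned char)((v >> 24) & 0xFF);
    out[1] = (unsigned char)((v >> 16) & 0xFF);
    out[2] = (unsigned char)((v >> 8) & 0xFF);
    out[3] = (unsigned char)(v & 0xFF);
}

// Reassemble the value the same way on any platform.
unsigned long get_u32_be(const unsigned char in[4]) {
    return ((unsigned long)in[0] << 24) |
           ((unsigned long)in[1] << 16) |
           ((unsigned long)in[2] << 8)  |
           (unsigned long)in[3];
}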


Also place checks when loading the data, so that the stored values do
not exceed the limits of the ranges supported on the current platform
(or place the checks when you save the data, so that the data do not
exceed the 32-bit limitation).
Or create a class. In any case, for data portability the data must be
stored in text mode.

....
Jul 22 '05 #3

Gianni Mariani wrote:
In practice,

Or better: In theory.
all you need is something that will convert to a "known"
format. ASCII numeric strings are a "known" format, but they can be
terribly inefficient.

See:
http://groups.google.com/groups?hl=e...iani.ws&rnum=9
for an example of how to deal with endianness issues; it could easily be
extended to deal with other formats. However, it's highly unlikely that
anyone will ever need anything different.


Yes, however in most cases ASCII is sufficient.

If in a particular case this is inefficient, one can use other formats.
For example, one can use a library to save the data in XML.


Regards,

Ioannis Vranos

http://www23.brinkster.com/noicys
Jul 22 '05 #4

Ioannis Vranos wrote:
....

Yes, however in most cases ASCII is sufficient.

If in a particular case this is inefficient, one can use other formats.
For example, one can use a library to save the data in XML.


I don't see how you can make the jump that XML is more efficient than
ASCII.

Having written an XML parser (a subset, actually), I can say that it is
far from "efficient".
Jul 22 '05 #5

Gianni Mariani wrote:
I don't see how you can make the jump that XML is more efficient than
ASCII.

Having written an XML parser (a subset, actually), I can say that it is
far from "efficient".

I was talking about using third-party libraries for that. If you mean
that the use of such libraries is not efficient, well, there are good
libraries out there. For example, .NET provides XML facilities that
make it easy to write your data in XML.

I am sure that there are other good ones too.


Regards,

Ioannis Vranos

http://www23.brinkster.com/noicys
Jul 22 '05 #6

Gianni Mariani wrote:
I don't see how you can make the jump that XML is more efficient than
ASCII.

Having written an XML parser (a subset, actually), I can say that it is
far from "efficient".


And it all depends on what you mean by "efficient".


Regards,

Ioannis Vranos

http://www23.brinkster.com/noicys
Jul 22 '05 #7

Ioannis Vranos wrote:

I was talking about using third-party libraries for that. If you mean
that the use of such libraries is not efficient, well, there are good
libraries out there. For example, .NET provides XML facilities that
make it easy to write your data in XML.


Isn't XML a text-based scheme?

In the practical world I code in, we have a data structure that holds
about 200 MB of floating-point numbers. I wonder what size of XML file
that would create.

Jul 22 '05 #8

lilburne wrote:
Isn't XML a text-based scheme?

In the practical world I code in, we have a data structure that holds
about 200 MB of floating-point numbers. I wonder what size of XML file
that would create.


The implementation decisions we make always depend on our needs and our
constraints. :-) For example, you can use compression.
Anyway, this discussion is not practical any more. There is no general,
silver-bullet solution for everything.


Regards,

Ioannis Vranos

http://www23.brinkster.com/noicys
Jul 22 '05 #9

Ioannis Vranos wrote:
Gianni Mariani wrote:
I don't see how you can make the jump that XML is more efficient than
ASCII.

Having written an XML parser (a subset, actually), I can say that it is
far from "efficient".


I was talking about using third-party libraries for that. If you mean
that the use of such libraries is not efficient, well, there are good
libraries out there. For example, .NET provides XML facilities that
make it easy to write your data in XML.

I am sure that there are other good ones too.

I don't think we are talking about the same thing.

Reading/writing XML is relatively computationally expensive compared to
reading/writing ASCII, which in turn is much more computationally
expensive than reading/writing binary data.

If you find contrary examples, I suggest you look more closely for
poorly written or structured code (if performance is important to you,
that is!).
Jul 22 '05 #10

Gianni Mariani wrote:
I don't think we are talking about the same thing.

Yes, that's what I said in another message. The discussion is too general.
Reading/writing XML is relatively computationally expensive compared to
reading/writing ASCII, which in turn is much more computationally
expensive than reading/writing binary data.

Yes, we agree on that.


Regards,

Ioannis Vranos

http://www23.brinkster.com/noicys
Jul 22 '05 #11

Ian
Adam Ierymenko wrote:

So I'd like to use standard types. Is there *any* way to do this, or
do I have to create a typedefs.h file like every cryptography or other
integer-size-sensitive C/C++ program has to do?


Be explicit, use int32_t or uint32_t.

They don't help with endian issues (the other postings in this thread cover
that), but inside the code these fixed-size types are your friend.
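
Something like this, assuming the C99 <stdint.h> header is available (the
struct and the rotate helper are just an illustration, not anything from a
particular library):

#include <stdint.h>   // C99 header; provides int32_t, uint32_t on most compilers

// State words are exactly 32 bits on every platform, so code cannot
// silently pick up 64-bit semantics on an LP64 system.
struct State {
    uint32_t words[256];
};

uint32_t rotate_left(uint32_t v, unsigned n) {
    n &= 31u;                                    // keep the shift in range
    return (v << n) | (v >> ((32u - n) & 31u));  // wraps within 32 bits everywhere
}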

Ian
Jul 22 '05 #12

Ian wrote:
Be explicit, use int32_t or uint32_t.

However, these are not part of the current C++ standard.


Regards,

Ioannis Vranos

http://www23.brinkster.com/noicys
Jul 22 '05 #13

On Sun, 15 Aug 2004 01:17:06 +0300, Ioannis Vranos wrote:
....
Since you are looking for saved-data portability, you will *have to*
save in text mode always, since in one implementation the binary
representation of a type can be different from that of another
implementation.
....


This doesn't solve the problem. If the 64-bit platform wrote 64-bit
ASCII values, they would still get truncated when they were loaded
into 32-bit longs on the 32-bit platform.

I think there's no choice but to use the uint32_t types, despite the fact
that they are not yet standard in C++. I could always define them myself
on platforms that don't support them, but I know that Linux does, and
that's the development platform.
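
Something like this is what I have in mind for the fallback (the
HAVE_STDINT_H macro and the compile-time size check are my own convention,
not anything standard):

#if defined(HAVE_STDINT_H)
#include <stdint.h>              // C99 fixed-width types; gcc/Linux has this
#else
typedef signed int   int32_t;    // assumes plain int is 32 bits on such platforms
typedef unsigned int uint32_t;
#endif

// Pre-standard compile-time check: the array size becomes -1 (a compile
// error) if the assumption above is wrong, so a bad port fails to build
// instead of silently evolving 64-bit behavior.
typedef char assert_uint32_is_4_bytes[(sizeof(uint32_t) == 4) ? 1 : -1];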

Jul 22 '05 #14

Adam Ierymenko wrote:
This doesn't solve the problem. If the 64-bit platform wrote 64-bit
ASCII values, they would still get truncated when they were loaded
into 32-bit longs on the 32-bit platform.

I think there's no choice but to use the uint32_t types, despite the fact
that they are not yet standard in C++. I could always define them myself
on platforms that don't support them, but I know that Linux does, and
that's the development platform.


Yes, using a typedef is a good choice, and in most cases that could be an
int.


Regards,

Ioannis Vranos

http://www23.brinkster.com/noicys
Jul 22 '05 #15

Ian
Ioannis Vranos wrote:
Ian wrote:
Be explicit, use int32_t or uint32_t.


However, these are not part of the current C++ standard.


OK, C99, but just about every current compiler has them. If not, use
them as a guide and roll your own.

Ian
Jul 22 '05 #16


"Ioannis Vranos" <iv*@guesswh.at.grad.com> skrev i en meddelelse
news:cf**********@ulysses.noc.ntua.gr...
Gianni Mariani wrote:
I don't see how you can make the jump that XML is more efficient than
ASCII.

Having written an XML parser (a subset, actually), I can say that it is
far from "efficient".

I was talking about using third-party libraries for that. If you mean
that the use of such libraries is not efficient, well, there are good
libraries out there. For example, .NET provides XML facilities that
make it easy to write your data in XML.


Well... it requires just a quick glance at the XML specification to realize
that whatever virtues that standard might have, speed is not one of them.
I am sure that there are other good ones too.
Probably. But there is no way to write an XML parser that is faster than a
simple conversion to ASCII.


/Peter
Jul 22 '05 #17


"Adam Ierymenko" <ap*@n0junkmal1l.clearlight.com> wrote in message
news:pa****************************@n0junkmal1l.clearlight.com...
....
So I'd like to use standard types. Is there *any* way to do this, or
do I have to create a typedefs.h file like every cryptography or other
integer-size-sensitive C/C++ program has to do?


Typedefs might be useful. But what you really want is to define an external
data representation which is fixed on every platform. You could follow
either XML or ASN.1 to do this, or cook up your own data format.
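
For example, the read side of a cooked-up fixed format might look roughly
like this (the magic value, field layout, and function names are invented
for illustration): every field is exactly four bytes, most significant byte
first, regardless of which platform does the reading or writing.

#include <cstdio>
#include <stdexcept>
#include <vector>

// External layout: magic, count, then count values, each four bytes wide.
static const unsigned long kMagic = 0x45564F31UL;   // "EVO1", an arbitrary tag

unsigned long read_u32_be(std::FILE* f) {
    unsigned char b[4];
    if (std::fread(b, 1, 4, f) != 4)
        throw std::runtime_error("truncated state file");
    return ((unsigned long)b[0] << 24) | ((unsigned long)b[1] << 16) |
           ((unsigned long)b[2] << 8)  |  (unsigned long)b[3];
}

std::vector<unsigned long> read_state(std::FILE* f) {
    if (read_u32_be(f) != kMagic)
        throw std::runtime_error("not a state file");
    unsigned long count = read_u32_be(f);
    std::vector<unsigned long> values;
    for (unsigned long i = 0; i < count; ++i)
        values.push_back(read_u32_be(f));    // always exactly 32 bits per value
    return values;
}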
Jul 22 '05 #18
