Bytes | Software Development & Data Engineering Community

How to get around the "long" 32/64-bit mess?

I have a question that might have been asked before, but I have not been
able to find anything definitive via groups.google.com or any web search.

I am writing an evolutionary AI application in C++. I want to use 32-bit
integers, and I want the application to be able to save its state in a
portable fashion.

The obvious choice would be to use the "long" type, as it is defined to be
at least 32 bits in size. However, the definition says "at least", and I
believe that on some platforms (gcc/AMD64???) long ends up being a 64-bit
integer instead. In other words, there are cases where sizeof(long) ==
sizeof(long long) if I'm not mistaken.

The problem is this: it's an evolutionary AI application, so if I use long
as the data type and run it on a platform where sizeof(long) == 8, then it
will *evolve* to take advantage of this. Then, if I save its state, this
state will either be invalid or will get truncated into 32-bit ints if I
then reload it on a 32-bit platform where sizeof(long) == 4. I am
trying to make this program *fast*, so I want to avoid having to waste
cycles doing meaningless "& 0xffffffff" operations all over the place or
other nasty hacks to get around this type madness.

The nasty solution that I am trying to avoid is a typedefs.h-style file
that defines a bunch of architecture-dependent typedefs. Yuck. Did I
mention that this C++ code is going to be compiled into a library that
will then get linked with other applications? I don't want the other
applications to have to compile with -DARCH_WHATEVER to get this library
to link correctly with its typedefs.h file nastiness.

So I'd like to use standard types. Is there *any* way to do this, or
do I have to create a typedefs.h file like every cryptography or other
integer-size-sensitive C/C++ program has to do?

Jul 22 '05 #1
Adam Ierymenko wrote:
....
So I'd like to use standard types. Is there *any* way to do this, or
do I have to create a typedefs.h file like every cryptography or other
integer-size-sensitive C/C++ program has to do?


Well, you can do many things. For example, you could save its state by
converting the values to string representations, or use some other trick.
On most 32-bit platforms int is 32 bits, so you could rely on that
assumption.
Since you are looking for saved-data portability, you will *have to*
save in text mode, since the binary representation of a type in one
implementation can differ from that of another implementation.
Also place checks when loading the data, so that the stored values do
not exceed the limits of the ranges supported on the current platform
(or place the checks when you save the data, so that the values do not
exceed the 32-bit limit).
Or create a class. In any case, for data portability the data must be
stored in text mode.
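
To be concrete, here is a rough, untested sketch of what I mean (the
function names and the use of streams are just for illustration): write
each value as decimal text, and check the range when reading it back.

    #include <iostream>
    #include <stdexcept>

    // Write one value as decimal text; the textual form is identical on
    // every platform, whatever sizeof(long) happens to be.
    void save_value(std::ostream& out, unsigned long v)
    {
        out << v << '\n';
    }

    // Read one value back and make sure it fits in the 32-bit range the
    // program relies on.
    unsigned long load_value(std::istream& in)
    {
        unsigned long v;
        if (!(in >> v))
            throw std::runtime_error("malformed state file");
        if (v > 0xFFFFFFFFUL)
            throw std::runtime_error("stored value exceeds the 32-bit range");
        return v;
    }

The text file then reads back the same way whether it was written by a
32-bit or a 64-bit build.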


Regards,

Ioannis Vranos

http://www23.brinkster.com/noicys
Jul 22 '05 #2
Ioannis Vranos wrote:
Adam Ierymenko wrote:
....


Well, you can do many things. For example, you could save its state by
converting the values to string representations, or use some other trick.
On most 32-bit platforms int is 32 bits, so you could rely on that
assumption.
Since you are looking for saved-data portability, you will *have to*
save in text mode, since the binary representation of a type in one
implementation can differ from that of another implementation.
In practice, all you need is something that will convert to a "known"
format. ASCII numeric strings are a "known" format, but they can be
terribly inefficient.

See:
http://groups.google.com/groups?hl=e...iani.ws&rnum=9

for an example of how to deal with endianness issues; it could easily be
extended to deal with other formats. However, it's highly unlikely that
anyone will ever need anything different.
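
The basic idea in that post is simply to pick one byte order for the
external format and shift the values into it explicitly. Roughly like
this (not the code from the link, just a sketch of the shape):

    // Write a 32-bit value in a fixed (here big-endian) byte order,
    // independent of the host machine's endianness.
    void put_u32(unsigned char* buf, unsigned long v)
    {
        buf[0] = (unsigned char)((v >> 24) & 0xFF);
        buf[1] = (unsigned char)((v >> 16) & 0xFF);
        buf[2] = (unsigned char)((v >> 8)  & 0xFF);
        buf[3] = (unsigned char)( v        & 0xFF);
    }

    // Read it back the same way on any platform.
    unsigned long get_u32(const unsigned char* buf)
    {
        return ((unsigned long)buf[0] << 24) |
               ((unsigned long)buf[1] << 16) |
               ((unsigned long)buf[2] << 8)  |
                (unsigned long)buf[3];
    }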


Also place checks when loading the data, so that the stored values do
not exceed the limits of the ranges supported on the current platform
(or place the checks when you save the data, so that the values do not
exceed the 32-bit limit).
Or create a class. In any case, for data portability the data must be
stored in text mode.

....
Jul 22 '05 #3
Gianni Mariani wrote:
In practice,

Or better: In theory.
all you need is something that will convert to a "known"
format. ASCII numeric strings are a "known" format, but they can be
terribly inefficient.

See:
http://groups.google.com/groups?hl=e...iani.ws&rnum=9
for an example of how to deal with endianness issues; it could easily be
extended to deal with other formats. However, it's highly unlikely that
anyone will ever need anything different.


Yes, however in most cases ASCII is sufficient.

However, if in a particular case this is inefficient, one can use other
formats. For example, one can use a library to save the data in XML.


Regards,

Ioannis Vranos

http://www23.brinkster.com/noicys
Jul 22 '05 #4
Ioannis Vranos wrote:
....

Yes, however in most cases ASCII is sufficient.

However, if in a particular case this is inefficient, one can use other
formats. For example, one can use a library to save the data in XML.


I don't see how you can make this jump (that XML is more efficient than
ASCII).

Having written an XML parser (a subset, actually), I can say that it is
far from "efficient".
Jul 22 '05 #5
Gianni Mariani wrote:
I don't see how you can make this jump (that XML is more efficient than
ASCII).

Having written an XML parser (a subset, actually), I can say that it is
far from "efficient".

I was talking about using third-party libraries for that. If you mean
that the use of such libraries is not efficient, well, there are
libraries out there. For example, .NET provides XML facilities that
make it easy to write your data in XML.

I am sure that there are other good ones too.


Regards,

Ioannis Vranos

http://www23.brinkster.com/noicys
Jul 22 '05 #6
Gianni Mariani wrote:
I don't see how you can make this jump (that XML is more efficient than
ASCII).

Having written an XML parser (a subset, actually), I can say that it is
far from "efficient".


And it all depends on what you mean by "efficient".


Regards,

Ioannis Vranos

http://www23.brinkster.com/noicys
Jul 22 '05 #7
Ioannis Vranos wrote:

I was talking about using third-party libraries for that. If you mean
that the use of such libraries is not efficient, well, there are
libraries out there. For example, .NET provides XML facilities that
make it easy to write your data in XML.


Isn't XML a text-based scheme?

In the practical world I code in, we have a data structure that holds
about 200 MB of floating-point numbers. I wonder what size of XML file
that would create.

Jul 22 '05 #8
lilburne wrote:
Isn't XML a text-based scheme?

In the practical world I code in, we have a data structure that holds
about 200 MB of floating-point numbers. I wonder what size of XML file
that would create.


The implementation decisions we make always depend on our needs and our
constraints. :-) For example, you can use compression.
Anyway, this discussion is not practical any more. There is no general,
silver-bullet solution for everything.


Regards,

Ioannis Vranos

http://www23.brinkster.com/noicys
Jul 22 '05 #9
Ioannis Vranos wrote:
Gianni Mariani wrote:
I don't see how you can make this jump (that XML is more efficient than
ASCII).

Having written an XML parser (a subset, actually), I can say that it is
far from "efficient".


I was talking about using third-party libraries for that. If you mean
that the use of such libraries is not efficient, well, there are
libraries out there. For example, .NET provides XML facilities that
make it easy to write your data in XML.

I am sure that there are other good ones too.

I don't think we are talking about the same thing.

Reading/writing XML is relatively computationally expensive compared to
reading/writing ASCII, which in turn is much more computationally
expensive than reading/writing binary data.

If you find contrary examples, I suggest you look more closely for
poorly written or structured code (if performance is important to you,
that is!).
Jul 22 '05 #10
Gianni Mariani wrote:
I don't think we are talking about the same thing.

Yes, that's what I said in another message. The discussion is too general.
Reading/writing XML is relatively computationally expensive compared to
reading/writing ASCII, which in turn is much more computationally
expensive than reading/writing binary data.

Yes, we agree on that.


Regards,

Ioannis Vranos

http://www23.brinkster.com/noicys
Jul 22 '05 #11
Ian
Adam Ierymenko wrote:

So I'd like to use standard types. Is there *any* way to do this, or
do I have to create a typedefs.h file like every cryptography or other
integer-size-sensitive C/C++ program has to do?


Be explicit, use int32_t or uint32_t.

They don't help with endianness issues (the other postings in this thread
cover that), but inside the code these fixed-size types are your friend.
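
For example (a trivial sketch, and the variable names are made up; on
most current compilers these types come from the C99 <stdint.h> header):

    #include <stdint.h>   // C99 header; most C++ compilers ship it anyway

    // Exactly 32 bits wide wherever these types exist, so saved state
    // means the same thing on every platform.
    uint32_t genome_word = 0xDEADBEEFu;
    int32_t  fitness     = -42;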

Ian
Jul 22 '05 #12
Ian wrote:
Be explicit, use int32_t or uint32_t.

However, these are not part of the current C++ standard.


Regards,

Ioannis Vranos

http://www23.brinkster.com/noicys
Jul 22 '05 #13
On Sun, 15 Aug 2004 01:17:06 +0300, Ioannis Vranos wrote:
Well, you can do many things. For example, you could save its state by
converting the values to string representations, or use some other trick.
On most 32-bit platforms int is 32 bits, so you could rely on that
assumption.
Since you are looking for saved-data portability, you will *have to*
save in text mode, since the binary representation of a type in one
implementation can differ from that of another implementation.
Also place checks when loading the data, so that the stored values do
not exceed the limits of the ranges supported on the current platform
(or place the checks when you save the data, so that the values do not
exceed the 32-bit limit).
Or create a class. In any case, for data portability the data must be
stored in text mode.


This doesn't solve the problem. If the 64-bit platform wrote 64-bit
ASCII values, they would still get truncated when they were loaded
into 32-bit longs on the 32-bit platform.

I think there's no choice but to use uint32_t types, despite the fact
that they are not yet standard in C++. I could always define them on
platforms that don't support them, but I know that Linux does, and
that's the development platform.

Jul 22 '05 #14
Adam Ierymenko wrote:
This doesn't solve the problem. If the 64-bit platform wrote 64-bit
ASCII values, they would still get truncated when they were loaded
into 32-bit longs on the 32-bit platform.

I think there's no choice but to use uint32_t types, despite the fact
that they are not yet standard in C++. I could always define them on
platforms that don't support them, but I know that Linux does, and
that's the development platform.


Yes, using a typedef is a good choice, and in most cases it could be an
int.


Regards,

Ioannis Vranos

http://www23.brinkster.com/noicys
Jul 22 '05 #15
Ian
Ioannis Vranos wrote:
Ian wrote:
Be explicit, use int32_t or uint32_t.


However, these are not part of the current C++ standard.


OK, C99, but just about every current compiler has them. If not, use
them as a guide and roll your own.
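
Something like this, roughly (untested; if the compiler does provide
<stdint.h> you would just include that instead):

    #include <climits>

    // Pick whichever built-in type is exactly 32 bits on this platform.
    #if UINT_MAX == 0xFFFFFFFF
    typedef unsigned int   uint32_t;
    typedef signed int     int32_t;
    #elif ULONG_MAX == 0xFFFFFFFF
    typedef unsigned long  uint32_t;
    typedef signed long    int32_t;
    #else
    #error "No 32-bit integer type found for this platform"
    #endif

With that in place the rest of the code just uses uint32_t and never
cares what the platform calls it.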

Ian
Jul 22 '05 #16

"Ioannis Vranos" <iv*@guesswh.at.grad.com> skrev i en meddelelse
news:cf**********@ulysses.noc.ntua.gr...
Gianni Mariani wrote:
I don't see how you can make this jump (that XML is more efficient than
ASCII).

Having written an XML parser (a subset, actually), I can say that it is
far from "efficient".

I was talking about using third-party libraries for that. If you mean
that the use of such libraries is not efficient, well, there are
libraries out there. For example, .NET provides XML facilities that
make it easy to write your data in XML.


Well... it requires just a quick glance at the XML specification to realize
that whatever virtues that standard might have, speed is not one of them.
I am sure that there are other good ones too.
Probably. But there is no way to write an XML parser that is faster than a
simple conversion to ASCII.


/Peter
Jul 22 '05 #17

"Adam Ierymenko" <ap*@n0junkmal1l.clearlight.com> wrote in message
news:pa****************************@n0junkmal1l.clearlight.com...
....
So I'd like to use standard types. Is there *any* way to do this, or
do I have to create a typedefs.h file like every cryptography or other
integer-size-sensitive C/C++ program has to do?


Typedefs might be useful. But what you really want is to define an external
data representation which is fixed on every platform. You could follow
either XML or ASN.1 to do this. Or cook up your own data format.
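
For the home-grown route, the point is that the widths and byte order are
fixed by the format itself rather than by whatever long happens to be on
the host. A rough sketch of one possible record writer (the tag/length
layout here is invented purely for illustration):

    #include <vector>

    // One record of a made-up external format: 1-byte tag, 4-byte
    // big-endian length, then the payload. Every platform writes and
    // reads exactly the same bytes.
    void append_record(std::vector<unsigned char>& out,
                       unsigned char tag,
                       const unsigned char* payload,
                       unsigned long len)
    {
        out.push_back(tag);
        out.push_back((unsigned char)((len >> 24) & 0xFF));
        out.push_back((unsigned char)((len >> 16) & 0xFF));
        out.push_back((unsigned char)((len >> 8)  & 0xFF));
        out.push_back((unsigned char)( len        & 0xFF));
        out.insert(out.end(), payload, payload + len);
    }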
Jul 22 '05 #18

This thread has been closed and replies have been disabled. Please start a new discussion.
