Difference between "float" and "GLfloat" ?

Does anyone know why OpenGL applications must use GLfloat (and GLint,
etc.) instead of float, int, etc.?

thx,

Manuel
Dec 31 '05 #1
Manuel wrote:
Does anyone know why OpenGL applications must use GLfloat (and GLint,
etc.) instead of float, int, etc.?

thx,

Manuel


Portability. The GL Red Book says:

"Implementations of OpenGL have leeway in selecting which C data type to
use to represent OpenGL data types. If you resolutely use the OpenGL
defined data types throughout your application, you will avoid
mismatched types when porting your code between different implementations."
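
For a concrete (purely illustrative) sketch, not from the book: declare the buffers you hand to the API with its own typedefs, and the code stays correct even on an implementation where GLint or GLfloat is not plain int or float. The function name below is made up for the example.

#include <GL/gl.h>   // on Windows, <windows.h> must be included before this

// Illustrative only: pass the GL typedefs the prototypes actually expect.
void draw_marker()
{
    GLint viewport[4];                      // x, y, width, height
    glGetIntegerv(GL_VIEWPORT, viewport);   // prototype takes GLint*, not int*

    GLfloat vertex[3] = {0.0f, 1.0f, 0.0f};
    glBegin(GL_POINTS);
    glVertex3fv(vertex);                    // prototype takes const GLfloat*
    glEnd();
}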
Dec 31 '05 #2
W Marsh wrote:
Portability. The GL Red Book says:

"Implementations of OpenGL have leeway in selecting which C data type to
use to represent OpenGL data types. If you resolutely use the OpenGL
defined data types throughout your application, you will avoid
mismatched types when porting your code between different implementations."

Thanks.
But is this a problem with C++ too?
If I declare an "int" under Windows, might it be different from an "int"
under Linux or OS X?
Regards,

Manuel
Dec 31 '05 #3
Manuel wrote:
W Marsh wrote:
Portability. The GL Red Book says:

"Implementations of OpenGL have leeway in selecting which C data type
to use to represent OpenGL data types. If you resolutely use the
OpenGL defined data types throughout your application, you will avoid
mismatched types when porting your code between different
implementations."


Thanks.
But is this a problem with C++ too?
If I declare an "int" under Windows, might it be different from an "int"
under Linux or OS X?
Regards,

Manuel


It may be. Even a char can have more than 8 bits.
Dec 31 '05 #4

Manuel wrote:
W Marsh wrote:
Portability. The GL Red Book says:

"Implementations of OpenGL have leeway in selecting which C data type to
use to represent OpenGL data types. If you resolutely use the OpenGL
defined data types throughout your application, you will avoid
mismatched types when porting your code between different implementations."

Thanks.
But is this a problem with C++ too?
If I declare an "int" under Windows, might it be different from an "int"
under Linux or OS X?


Yes. There is no requirement for an int to be the same size on every
platform.

sizeof returns a number of bytes. You are guaranteed that

sizeof(char) == 1
sizeof(char) <= sizeof(short) <= sizeof(int) <= sizeof(long)
CHAR_BIT >= 8

where CHAR_BIT, available by including the <climits> or <limits.h>
header, is the number of bits in a char (i.e. in a byte).

I believe there are also some *minimum* size guarantees for the
integral types. I'm not sure what they are, or whether they are
specified as a number of bits, a number of bytes, or a range of values
that must be accommodated.
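
A tiny illustrative sketch that prints what a particular implementation actually chose; the output differs between compilers and platforms:

#include <climits>    // CHAR_BIT
#include <iostream>

int main()
{
    // Only the guarantees above constrain these values,
    // so they can differ from one platform to another.
    std::cout << "CHAR_BIT      = " << CHAR_BIT      << '\n'
              << "sizeof(short) = " << sizeof(short) << '\n'
              << "sizeof(int)   = " << sizeof(int)   << '\n'
              << "sizeof(long)  = " << sizeof(long)  << '\n';
    return 0;
}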

Gavin Deane

Dec 31 '05 #5
Manuel wrote:
W Marsh wrote:
Portability. The GL Red Book says:

"Implementations of OpenGL have leeway in selecting which C data type
to use to represent OpenGL data types. If you resolutely use the
OpenGL defined data types throughout your application, you will avoid
mismatched types when porting your code between different
implementations."

Thanks.
But is this a problem with C++ too?
If I declare an "int" under Windows, might it be different from an "int"
under Linux or OS X?


Yes. This is why the various APIs that use C and C++ provide typedefs
to nail the sizes down on a particular implementation. Even the
standard language has things like size_t.

The latest version of the C standard even includes some predefined
typedefs that guarantee specific traits, such as an exact width, for
various integer types.
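
A rough sketch of those C99 typedefs in use, assuming the implementation ships <stdint.h> (the exact-width types are optional, so this is not guaranteed everywhere):

#include <stdint.h>   // C99 exact-width typedefs; not yet standard C++ at the time
#include <iostream>

int main()
{
    // Exact-width types pin the size down regardless of what plain int is.
    int32_t  count = 0;   // exactly 32 bits wherever this typedef exists
    uint16_t port  = 80;  // exactly 16 bits wherever this typedef exists

    std::cout << sizeof(count) << ' ' << sizeof(port) << '\n';   // prints "4 2"
    return 0;
}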
Dec 31 '05 #6
On 31 Dec 2005 06:37:11 -0800, "Gavin Deane" <de*********@hotmail.com>
wrote in comp.lang.c++:

Manuel wrote:
W Marsh wrote:
Portability. The GL Red Book says:

"Implementations of OpenGL have leeway in selecting which C data type to
use to represent OpenGL data types. If you resolutely use the OpenGL
defined data types throughout your application, you will avoid
mismatched types when porting your code between different implementations."

Thanks.
But is this a problem with C++ too?
If I declare an "int" under Windows, might it be different from an "int"
under Linux or OS X?


Yes. There is no requirement for an int to be the same size on every
platform.

sizeof returns a number of bytes. You are guaranteed that

sizeof(char) == 1
sizeof(char) <= sizeof(short) <= sizeof(int) <= sizeof(long)
CHAR_BIT >= 8

where CHAR_BIT, available by including the <climits> or <limits.h>
header, is the number of bits in a char (i.e. in a byte).

I believe there are also some *minimum* size guarantees for the
integral types. I'm not sure what they are, or whether they are
specified as a number of bits, a number of bytes, or a range of values
that must be accommodated.


They are specified as ranges of values, but if you look at the binary
representation of those values you can easily work out the minimum
number of bits, although the actual bit count may be greater because
padding bits are allowed in all but the "char" types.

You can see the ranges for all the integer types, including the C
"long long" type that is not yet part of C++, here:

http://www.jk-technology.com/c/inttypes.html#limits

You can easily work out the required minimum number of bits from the
required ranges:

char types, at least 8 bits
short int, at least 16 bits
int, at least 16 bits
long, at least 32 bits
long long (C since 1999, not official in C++) 64 bits
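
For illustration, a tiny sketch that prints the limits one implementation actually uses; they may exceed, but never fall below, the required minimums:

#include <climits>
#include <iostream>

int main()
{
    // Required minimum magnitudes: SHRT_MAX >= 32767, INT_MAX >= 32767,
    // LONG_MAX >= 2147483647; the actual values are often larger.
    std::cout << "SHRT_MAX = " << SHRT_MAX << '\n'
              << "INT_MAX  = " << INT_MAX  << '\n'
              << "LONG_MAX = " << LONG_MAX << '\n';
    return 0;
}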

--
Jack Klein
Home: http://JK-Technology.Com
FAQs for
comp.lang.c http://c-faq.com/
comp.lang.c++ http://www.parashift.com/c++-faq-lite/
alt.comp.lang.learn.c-c++
http://www.contrib.andrew.cmu.edu/~a...FAQ-acllc.html
Jan 1 '06 #7
Jack Klein wrote:
You can see the ranges for all the integer types, including the C
"long long" type that is not yet part of C++, here:

http://www.jk-technology.com/c/inttypes.html#limits

You can easily work out the required minimum number of bits from the
required ranges:

char types, at least 8 bits
short int, at least 16 bits
int, at least 16 bits
long, at least 32 bits
long long (C since 1999, not official in C++) 64 bits


Thanks Jack. A useful reference.

Gavin Deane

Jan 1 '06 #8
Ron Natalie wrote:
But is this a problem with C++ too?
If I declare an "int" under Windows, might it be different from an "int"
under Linux or OS X?

Yes. This is why the various APIs that use C and C++ provide typedefs
to nail the sizes down on a particular implementation. Even the
standard language has things like size_t.


Thanks.
But I have some difficulty understanding the problem.

Could a loop go "out of range"?
And is the problem not limited to OpenGL, but common to all multiplatform
applications?
If so... how do you solve it without using the GL types?

Can you show an example where using the various "int types" can crash or
ruin an application?

Thanks!

Jan 2 '06 #9

