i want to detect if the compile is 32 bits or 64 bits in the source
code itself. so different code are compiled respectively. how to do
this?
steve yee wrote:
i want to detect if the compile is 32 bits or 64 bits in the source
code itself. so different code are compiled respectively. how to do
this?
There is no portable way.
There *are* portable ways to check the ranges of arithmetic
types at compile time, if that's what your "different code" needs.
Use the macros defined in the <limits.h> header.
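For instance, a comparison against one of those macros is evaluated by the
preprocessor, so each branch can contain code the other configuration never
compiles; a minimal sketch:

#include <limits.h>
#include <stdio.h>

int main(void)
{
#if ULONG_MAX > 0xFFFFFFFFUL
    /* unsigned long has more than the minimum 32 value bits here */
    printf("unsigned long is wider than 32 bits\n");
#else
    /* unsigned long has exactly the guaranteed 32 value bits */
    printf("unsigned long is 32 bits wide\n");
#endif
    return 0;
}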
--
Eric Sosman es*****@acm-dot-org.invalid
steve yee posted:
i want to detect if the compile is 32 bits or 64 bits in the source
code itself. so different code are compiled respectively. how to do
this?
You shouldn't need to if you're writing portable code.
Nonetheless, you could try something like:
#include <limits.h>
/* Number of bits in inttype_MAX, or in any (1<<k)-1 where 0 <= k < 3.2E+10 */
#define IMAX_BITS(m) ((m) /((m)%0x3fffffffL+1) /0x3fffffffL %0x3fffffffL *30 \
                  + (m)%0x3fffffffL /((m)%31+1)/31%31*5 + 4-12/((m)%31+3))
#if IMAX_BITS(UINT_MAX) == 32
#define BIT32
#else
#if IMAX_BITS(UINT_MAX) == 64
#define BIT64
#else
#error "System must be either 32-Bit or 64-Bit."
#endif
#endif
int main(void)
{
/* Here's my non-portable code */
}
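As a quick sanity check, the value IMAX_BITS computes at compile time can be
compared with a bit count done the slow, obvious way at run time; a
self-contained sketch (repeating the macro so it compiles on its own):

#include <limits.h>
#include <stdio.h>

#define IMAX_BITS(m) ((m) /((m)%0x3fffffffL+1) /0x3fffffffL %0x3fffffffL *30 \
                  + (m)%0x3fffffffL /((m)%31+1)/31%31*5 + 4-12/((m)%31+3))

int main(void)
{
    unsigned int max = UINT_MAX;
    int bits = 0;

    while (max != 0) {      /* count the value bits one shift at a time */
        bits++;
        max >>= 1;
    }

    printf("IMAX_BITS(UINT_MAX) = %d, counted bits = %d\n",
           (int)IMAX_BITS(UINT_MAX), bits);
    return 0;
}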
--
Frederick Gotham
steve yee wrote:
i want to detect if the compile is 32 bits or 64 bits in the source
code itself. so different code are compiled respectively. how to do
this?
You invite bugs, as well as taking the question off the topic of
standard C, if you write source code which has to change. The usual
question is about (sizeof)<various pointer types>. If your question is
simply about (sizeof)int, you create problems by assuming it is
determined by whether you have a 32- or 64-bit platform. If you insist
on unions of pointers and ints, knowing whether it is 32 or 64 bits
won't save you. More so, if you are one of those who writes code which
depends on (sizeof)(size_t x).
Frederick Gotham wrote:
steve yee posted:
i want to detect if the compile is 32 bits or 64 bits in the source
code itself. so different code are compiled respectively. how to do
this?
You shouldn't need to if you're writing portable code.
Nonetheless, you could try something like:
#include <limits.h>
/* Number of bits in inttype_MAX, or in any (1<<k)-1 where 0 <= k < 3.2E+10 */
#define IMAX_BITS(m) ((m) /((m)%0x3fffffffL+1) /0x3fffffffL %0x3fffffffL *30 \
                  + (m)%0x3fffffffL /((m)%31+1)/31%31*5 + 4-12/((m)%31+3))
#if IMAX_BITS(UINT_MAX) == 32
#define BIT32
#else
#if IMAX_BITS(UINT_MAX) == 64
#define BIT64
#else
#error "System must be either 32-Bit or 64-Bit."
#endif
#endif
The IMAX_BITS tests could just as well be written as direct comparisons:
#if UINT_MAX == 0xFFFFFFFF
#define BIT32
#elif UINT_MAX == 0xFFFFFFFFFFFFFFFF
#define BIT64
#else
#error "System must be either 32-Bit or 64-Bit."
#endif
In article <0f*****************@newssvr25.news.prodigy.net>,
Tim Prince <tp*****@nospammyrealbox.com> wrote:
>steve yee wrote:
>i want to detect if the compile is 32 bits or 64 bits in the source code itself. so different code are compiled respectively. how to do this?
>You invite bugs, as well as taking the question off the topic of standard C, if you write source code which has to change.
Not necessarily.
>The usual question is about (sizeof)<various pointer types>. If your question is simply about (sizeof)int, you create problems by assuming it is determined by whether you have a 32- or 64-bit platform. If you insist on unions of pointers and ints, knowing whether it is 32 or 64 bits won't save you. More so, if you are one of those who writes code which depends on (sizeof)(size_t x).
You appear to have read a fair bit into the poster's question
that I don't think is justified by what the poster wrote.
Suppose I have an algorithm, such as a cryptography algorithm, that
operates on chunks of bits at a time. The algorithm is the same
(except perhaps for a few constants) whether I'm computing with 32 or
64 bits, but the chunk size differs for the two cases. In such a case,
I -could- write the code using only the minimum guaranteed size,
but on most platforms it would be noticeably more efficient to use
the larger chunk size *if the platform supports it*. The constants
for the algorithm could probably be computed at run-time and a
generic algorithm used, but in the real world, having the constants
available at compile time increases compiler optimization opportunities
leading to faster code.
In C, it is not an error to write int x = 40000; for use on
platforms that have an int of at least 17 bits. It is not maximally
portable, but it is not an error -- and the OP was asking for a
compile-time method of selecting such code for platforms that allow it,
dropping back to smaller value assumptions when that is all the
platform supports.
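A sketch of that kind of compile-time selection using nothing but
<limits.h> (chunk_t and CHUNK_BITS are illustrative names):

#include <limits.h>
#include <stdio.h>

/* Pick the widest convenient "chunk" type at preprocessing time. */
#if (ULONG_MAX >> 31 >> 31 >> 1) != 0   /* unsigned long has at least 64 value bits */
typedef unsigned long chunk_t;
#define CHUNK_BITS 64
#else                                   /* fall back to the guaranteed 32 bits */
typedef unsigned long chunk_t;
#define CHUNK_BITS 32
#endif

int main(void)
{
    printf("processing data %d bits at a time\n", CHUNK_BITS);
    return 0;
}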
--
Okay, buzzwords only. Two syllables, tops. -- Laurie Anderson
Walter Roberson wrote:
In article <0f*****************@newssvr25.news.prodigy.net>,
Tim Prince <tp*****@nospammyrealbox.com> wrote:
>steve yee wrote:
>>i want to detect if the compile is 32 bits or 64 bits in the source code itself. so different code are compiled respectively. how to do this?
>You invite bugs, as well as taking the question off the topic of standard C, if you write source code which has to change.
Not necessarily.
>The usual question is about (sizeof)<various pointer types>. If your question is simply about (sizeof)int, you create problems by assuming it is determined by whether you have a 32- or 64-bit platform. If you insist on unions of pointers and ints, knowing whether it is 32 or 64 bits won't save you. More so, if you are one of those who writes code which depends on (sizeof)(size_t x).
You appear to have read a fair bit into the poster's question
that I don't think is justified by what the poster wrote.
OP didn't explain what the reason for non-portable code might be.
Certainly, there wasn't anything about finding the largest efficient
data types.
>
Suppose I have an algorithm, such as a cryptography algorithm, that
operates on chunks of bits at a time. The algorithm is the same
(except perhaps for a few constants) whether I'm computing with 32 or
64 bits, but the chunk size differs for the two cases. In such a case,
I -could- write the code using only the minimum guaranteed size,
but on most platforms it would be noticeably more efficient to use
the larger chunk size *if the platform supports it*. The constants
for the algorithm could probably be computed at run-time and a
generic algorithm used, but in the real world, having the constants
available at compile time increases compiler optimization opportunities
leading to faster code.
So, your interpretation of 32-bit vs 64-bit is based on whether there
is an efficient implementation of 64-bit int. It's certainly likely,
but not assured, that a compiler would choose (sizeof)int in accordance
with efficiency. For example, most Windows compilers have 32-bit int
and long even on platforms which have efficient 64- and (limited)
128-bit instructions, whereas Linux for the same platform supports a 64-bit long.
If this is your goal, you need a configure script which checks existence
of various candidate data types, which don't necessarily appear in
<stdint.h>, and tests their efficiency.
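The kind of probe such a script compiles might look like this sketch -- it
builds only where <limits.h> advertises an unsigned type of at least 64
value bits, and the script records whether the compilation succeeded:

/* probe.c: a configure script would try to compile this and note the result. */
#include <limits.h>

#if !defined(ULLONG_MAX) && !(ULONG_MAX >> 31 >> 31 >> 1)
#error "no standard unsigned type with at least 64 value bits"
#endif

int main(void)
{
    return 0;
}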
"steve yee" <yi******@gmail.comwrites:
i want to detect if the compile is 32 bits or 64 bits in the source
code itself. so different code are compiled respectively. how to do
this?
The first step is to define exactly what you mean when you say "the
compile is 32 bits or 64 bits". There is no universal definition for
these terms.
--
Keith Thompson (The_Other_Keith) ks***@mib.org <http://www.ghoti.net/~kst>
San Diego Supercomputer Center <*> <http://users.sdsc.edu/~kst>
We must do something. This is something. Therefore, we must do this.
Walter Roberson wrote:
In article <0f*****************@newssvr25.news.prodigy.net>,
Tim Prince <tp*****@nospammyrealbox.com> wrote:
steve yee wrote:
i want to detect if the compile is 32 bits or 64 bits in the source
code itself. so different code are compiled respectively. how to do
this?
You invite bugs, as well as taking the question off the topic of
standard C, if you write source code which has to change.
Not necessarily.
The usual
question is about (sizeof)<various pointer types>. If your question is
simply about (sizeof)int, you create problems by assuming it is
determined by whether you have a 32- or 64-bit platform. If you insist
on unions of pointers and ints, knowing whether it is 32 or 64 bits
won't save you. More so, if you are one of those who writes code which
depends on (sizeof)(size_t x).
You appear to have read a fair bit into the poster's question
that I don't think is justified by what the poster wrote.
Suppose I have an algorithm, such as a cryptography algorithm, that
operates on chunks of bits at a time. The algorithm is the same
(except perhaps for a few constants) whether I'm computing with 32 or
64 bits, but the chunk size differs for the two cases. In such a case,
I -could- write the code using only the minimum guaranteed size,
but on most platforms it would be noticeably more efficient to use
the larger chunk size *if the platform supports it*. The constants
for the algorithm could probably be computed at run-time and a
generic algorithm used, but in the real world, having the constants
available at compile time increases compiler optimization opportunities
leading to faster code.
Indeed. How does knowing "if the compile is 32 bits or 64 bits" help
with this, though? What you're interested in is how big the available
types are.
In C, it is not an error to write int x = 40000; for use on
platforms that have an int of at least 17 bits. It is not maximally
portable, but it is not an error -- and the OP was asking for a
compile-time method of selecting such code for platforms that allow it,
dropping back to smaller value assumptions when that is all the
platform supports.
If I'm building for my 17-bit-int machine, how is code which tells me
if the compile is 32 bit or 64 bit going to help, especially since the
answer is "no"? I need to know if ints have at least 17 value bits.
Knowing "if the compile is 32 bits or 64 bits" requires a lot of
subsequent non-portable assumptions for the information to be of any
use. Much better to check for the things you need to know rather than
making assumptions which will change from compiler to compiler. What
does the compile being 32 or 64 bit tell me about the size of a long,
for example?
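In other words, test the property the code actually relies on. A sketch
along the lines of the int x = 40000 example (the typedef name is
illustrative):

#include <limits.h>

#if INT_MAX >= 40000
typedef int counter;   /* plain int can hold 40000 on this implementation */
#else
typedef long counter;  /* long is guaranteed to reach at least 2147483647 */
#endif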
Keith Thompson wrote:
"steve yee" <yi******@gmail.comwrites:
>>i want to detect if the compile is 32 bits or 64 bits in the source code itself. so different code are compiled respectively. how to do this?
The first step is to define exactly what you mean when you say "the
compile is 32 bits or 64 bits". There is no universal definition for
these terms.
Very true, that's why tools like GNU autoconf exist.
Information regarding platform capabilities belongs in a configuration
file.
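For example, autoconf's AC_CHECK_SIZEOF(long) records its result as
SIZEOF_LONG in a generated config.h, which the source then tests in #if;
a rough sketch of the consuming side:

#include "config.h"   /* generated by configure; defines SIZEOF_LONG, etc. */

#if SIZEOF_LONG == 8
/* code for a 64-bit long */
#elif SIZEOF_LONG == 4
/* code for a 32-bit long */
#else
#error "unexpected sizeof (long)"
#endif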
--
Ian Collins.
J. J. Farrell wrote:
Walter Roberson wrote:
In article <0f*****************@newssvr25.news.prodigy.net>,
Tim Prince <tp*****@nospammyrealbox.com> wrote:
>steve yee wrote:
>i want to detect if the compile is 32 bits or 64 bits in the source
>code itself. so different code are compiled respectively. how to do
>this?
>You invite bugs, as well as taking the question off the topic of
>standard C, if you write source code which has to change.
Not necessarily.
>The usual
>question is about (sizeof)<various pointer types>. If your question is
>simply about (sizeof)int, you create problems by assuming it is
>determined by whether you have a 32- or 64-bit platform. If you insist
>on unions of pointers and ints, knowing whether it is 32 or 64 bits
>won't save you. More so, if you are one of those who writes code which
>depends on (sizeof)(size_t x).
You appear to have read a fair bit into the poster's question
that I don't think is justified by what the poster wrote.
Suppose I have an algorithm, such as a cryptography algorithm, that
operates on chunks of bits at a time. The algorithm is the same
(except perhaps for a few constants) whether I'm computing with 32 or
64 bits, but the chunk size differs for the two cases. In such a case,
I -could- write the code using only the minimum guaranteed size,
but on most platforms it would be noticeably more efficient to use
the larger chunk size *if the platform supports it*. The constants
for the algorithm could probably be computed at run-time and a
generic algorithm used, but in the real world, having the constants
available at compile time increases compiler optimization opportunities
leading to faster code.
Indeed. How does knowing "if the compile is 32 bits or 64 bits" help
with this, though? What you're interested in is how big the available
types are.
In C, it is not an error to write int x = 40000; for use on
platforms that have an int of at least 17 bits. It is not maximally
portable, but it is not an error -- and the OP was asking for a
compile-time method of selecting such code for platforms that allow it,
dropping back to smaller value assumptions when that is all the
platform supports.
If I'm building for my 17-bit-int machine, how is code which tells me
if the compile is 32 bit or 64 bit going to help, especially since the
answer is "no"? I need to know if ints have at least 17 value bits.
Knowing "if the compile is 32 bits or 64 bits" requires a lot of
subsequent non-portable assumptions for the information to be of any
use. Much better to check for the things you need to know rather than
making assumptions which will change from compiler to compiler. What
does the compile being 32 or 64 bit tell me about the size of a long,
for example?
in fact, what i want to know is the size of the native long type of the
c compiler, at compile time. for example, the following code
if (sizeof (long) == 8)
{
// handle 64 bits long type
}
else if (sizeof (long) == 4)
{
// handle 32 bits long type
}
but, you know, this is done at run time, not compile time. i would like to
write code like this:
#if sizeof (long) == 8
{
// handle 64 bits long type
}
#elif sizeof (long) == 4
{
// handle 32 bits long type
}
#endif
but this simply doesn't work.
of course autoconf can help to achieve this. but i think the compiler
should provide a way to determine the size of each native type that it
supports. in fact the sizeof operator is computed at compile time, but the
compiler does not allow the sizeof operator to be used in #if
preprocessor statements. it should not have this limitation. so i wonder
if there's an alternative way to do this.
btw, on the windows platform, autoconf is not generally available.
steve yee posted:
#if sizeof (long) == 8
{
// handle 64 bits long type
Only if:
(1) 8 == CHAR_BIT
(2) "long" contains no padding bits.
}
#elif sizeof (long) == 4
{
// handle 32 bits long type
Similarly again.
of course autoconf can help to achieve this. but i think the compiler
should provide a way to determine the size of each native type that it
supports. in fact the sizeof operator is computed at compile time, but the
compiler does not allow the sizeof operator to be used in #if
preprocessor statements. it should not have this limitation. so i wonder
if there's an alternative way to do this.
There are the macros defined in <limits.h>.
Plus there's the macro entitled "IMAX_BITS" -- you'll find it if you do a
Google Groups search of comp.lang.c.
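For instance, the <limits.h> test goes straight into #if, and although
sizeof still cannot appear there, a constant expression can be made to
break the build when a size assumption is wrong (the typedef name below is
only illustrative):

#include <limits.h>

#if LONG_MAX > 0x7FFFFFFFL
/* long is wider than 32 bits */
#else
/* long has the minimum 32 bits */
#endif

/* Compile-time check of sizeof: the array size is negative, and the
 * translation unit fails to compile, if long is neither 4 nor 8 bytes. */
typedef char assert_long_is_4_or_8_bytes
        [(sizeof(long) == 4 || sizeof(long) == 8) ? 1 : -1];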
--
Frederick Gotham
steve yee wrote:
J. J. Farrell wrote:
Walter Roberson wrote:
In article <0f*****************@newssvr25.news.prodigy.net>,
Tim Prince <tp*****@nospammyrealbox.com> wrote:
steve yee wrote:
i want to detect if the compile is 32 bits or 64 bits in the source
code itself. so different code are compiled respectively. how to do
this?
>
You invite bugs, as well as taking the question off the topic of
standard C, if you write source code which has to change.
>
Not necessarily.
>
The usual
question is about (sizeof)<various pointer types>. If your question is
simply about (sizeof)int, you create problems by assuming it is
determined by whether you have a 32- or 64-bit platform. If you insist
on unions of pointers and ints, knowing whether it is 32 or 64 bits
won't save you. More so, if you are one of those who writes code which
depends on (sizeof)(size_t x).
>
You appear to have read a fair bit into the poster's question
that I don't think is justified by what the poster wrote.
>
Suppose I have an algorithm, such as a cryptography algorithm, that
operates on chunks of bits at a time. The algorithm is the same
(except perhaps for a few constants) whether I'm computing with 32 or
64 bits, but the chunk size differs for the two cases. In such a case,
I -could- write the code using only the minimum guaranteed size,
but on most platforms it would be noticeably more efficient to use
the larger chunk size *if the platform supports it*. The constants
for the algorithm could probably be computed at run-time and a
generic algorithm used, but in the real world, having the constants
available at compile time increases compiler optimization opportunities
leading to faster code.
Indeed. How does knowing "if the compile is 32 bits or 64 bits" help
with this, though? What you're interested in is how big the available
types are.
In C, it is not an error to write int x = 40000; for use on
platforms that have an int of at least 17 bits. It is not maximally
portable, but it is not an error -- and the OP was asking for a
compile-time method of selecting such code for platforms that allow it,
dropping back to smaller value assumptions when that is all the
platform supports.
If I'm building for my 17-bit-int machine, how is code which tells me
if the compile is 32 bit or 64 bit going to help, especially since the
answer is "no"? I need to know if ints have at least 17 value bits.
Knowing "if the compile is 32 bits or 64 bits" requires a lot of
subsequent non-portable assumptions for the information to be of any
use. Much better to check for the things you need to know rather than
making assumptions which will change from compiler to compiler. What
does the compile being 32 or 64 bit tell me about the size of a long,
for example?
in fact, what i want to know is the size of the native long type of the
c compiler, at compile time. for example, the following code
if (sizeof (long) == 8)
{
// handle 64 bits long type
}
else if (sizeof (long) == 4)
{
// handle 32 bits long type
}
but, you know, this is done at run time, not compile time.
In theory, it is done at run time. In practice, it is done at compile
time. It's a very easy optimisation for compilers when the expression
is constant.
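A sketch of the trade-off: in the run-time form both branches must still
compile for every target, even though the optimiser discards the branch
whose condition folds to false, whereas with #if the unused branch is never
seen by the compiler at all:

#include <stdio.h>

int main(void)
{
    if (sizeof(long) == 8) {
        /* dead code where long is not 8 bytes; typically removed entirely */
        printf("using the 64-bit long code path\n");
    } else {
        /* dead code where long is 8 bytes; typically removed entirely */
        printf("using the 32-bit long code path\n");
    }
    return 0;
}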
steve yee wrote:
>
in fact, what i want to know is the size of the native long type of the
c compiler, at compile time.
It's very likely that including <limits.h> and making appropriate use
of the LONG_MIN, LONG_MAX, and ULONG_MAX with #if will give you what
you need.
steve yee wrote:
i want to detect if the compile is 32 bits or 64 bits in the source
code itself. so different code are compiled respectively. how to do
this?
#include <limits.h>
#include <stdio.h>
int main(void)
{
#if UINT_MAX == 65535U
printf(" 16 bit int \n");
#elif UINT_MAX == 4294967295UL
#if ULONG_MAX == 4294967295UL
printf(" 32 bit int and long \n");
#elif ULONG_MAX == 18446744073709551615ULL
printf(" 32 bit int and 64 bit long \n");
#else
printf(" 32 bit int but neither 32 or 64 bit long \n");
#endif
#elif UINT_MAX == 18446744073709551615ULL
printf(" 64 bit int \n");
#else
printf(" neither 16, 32 or 64 bit int \n");
#endif
return 0;
}
steve yee wrote:
>
in fact, what i want to know is the size of the native long type of the
c compiler, at compile time. [...]
You can discover the bounds of the `long' type easily
enough by using the <limits.h> macros:
#include <limits.h>
#if LONG_MAX == 0x7fffffff
/* 32-bit long, the minimum size */
#elif (LONG_MAX >> 32) >= 0x7fffffff
/* 64 or more bits. Note that the `>>32' is
* not attempted until we've already established
* that `long' has >32 bits.
*/
#else
/* somewhere in 32 < bits < 64 */
#endif
You could, of course, use additional range tests to refine
the result further.
However, none of this will tell you whether the long
type is "native" in any useful sense. You cannot tell by
examining the range or size of `long' whether it is directly
supported by the hardware or emulated in software -- observe
that even an 8-bit CPU must support a `long' of at least 32
bits. Some machines even use a combination of hardware and
emulation: A system might add, subtract, and multiply `long'
values in hardware but use software for division.
At run time, you might use clock() to time a large number
of `int' and `long' calculations and try to guess from the
difference in CPU time whether there's software emulation
going on -- but that's going to be an iffy business at best
(note that the 8-bit CPU must use emulation even for `int').
You'd get a far better answer by reading the implementation's
documentation.
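A rough sketch of that timing idea, with all the caveats above -- unsigned
types are used so the deliberate wrap-around is well defined, and the
result is only ever suggestive:

#include <stdio.h>
#include <time.h>

#define REPS 100000000L

int main(void)
{
    volatile unsigned long lacc = 1;   /* volatile keeps the loops alive */
    volatile unsigned int iacc = 1;
    clock_t t0, t1, t2;
    long i;

    t0 = clock();
    for (i = 0; i < REPS; i++)
        lacc = lacc * 3 + 1;           /* unsigned long arithmetic */
    t1 = clock();
    for (i = 0; i < REPS; i++)
        iacc = iacc * 3 + 1;           /* unsigned int arithmetic */
    t2 = clock();

    printf("long: %.2f s, int: %.2f s\n",
           (double)(t1 - t0) / CLOCKS_PER_SEC,
           (double)(t2 - t1) / CLOCKS_PER_SEC);
    return 0;
}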
--
Eric Sosman es*****@acm-dot-org.invalid