
32 or 64 bit processor info in C

Hello,

Is there a way in C to get information at runtime if a processor is 32
or 64 bit?

Cheers,

Broeisi

Apr 10 '07
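Strictly speaking, portable C can only report properties of the compiled
program (its data model), not the processor itself: a 32-bit executable
running on a 64-bit CPU will report 32 bits. With that caveat, a minimal
sketch of the usual approach, using only standard headers (note that
uintptr_t/UINTPTR_MAX are optional C99 features, though widely available):

    #include <stdio.h>
    #include <limits.h>   /* CHAR_BIT */
    #include <stdint.h>   /* UINTPTR_MAX, C99 */

    int main(void)
    {
        /* The width of an object pointer is the usual proxy for whether
           this is a 32- or 64-bit build.  It reflects the compiler's data
           model, not the CPU: a 32-bit binary on a 64-bit chip says 32. */
        printf("pointer width: %u bits\n",
               (unsigned)(sizeof(void *) * CHAR_BIT));

    #if UINTPTR_MAX > 0xFFFFFFFFu
        puts("64-bit (or wider) build");
    #else
        puts("32-bit (or narrower) build");
    #endif
        return 0;
    }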
Malcolm McLean wrote:
> size_t is a change. It is not present in K and R C, and rightly so.

Ah, so you are objecting to C89 now. It's a little late for that.

> Another change is not having int as the natural integer type for the
> platform.

But 32 bit int is a natural integer type for most contemporary 64 bit
systems.

> Whilst there is an argument for having fixed-size types in a
> language, C chose the other route. We are now seeing a reversal of
> policy so that existing 32-bit code won't break under 64 bits. That is,
> badly-written 32-bit code. If it was written properly it wouldn't assume
> 32 bit ints and longs.

So you are labelling the BSD networking code as bad? The assumption
that long is 32 bits is pretty widespread there.

Whenever C hits the real world, developers have added fixed size types.
Now they are standardised. Good.

--
Ian Collins.
Apr 20 '07 #131
"Malcolm McLean" <re*******@btin ternet.comwrite s:
"Richard Bos" <rl*@hoekstra-uitgeverij.nlwr ote in message
news:46******** *********@news. xs4all.nl...
>"Malcolm McLean" <re*******@btin ternet.comwrote :
>>The answer is an ssize_t. Which for efficiency reasons is allowed
to fail if
the difference in size_t's spans more than half the address space.

Oh FFS, use ptrdiff_t already! Have you even _read_ the Standard, or are
you just talking out of blithe ignorance? ssize_t is completely
unnecessary, and if you knew C well, you'd know that.
In the example we have two calls to functions yielding size_ts. We
then want the difference, which can be negative. However ptrdiff_t is
not appropriate since the operands are not pointers. ssize_t is
intended for the purpose.
ssize_t is not part of the C standard (it's defined by POSIX). I
wouldn't mind if it were added to standard C. ssize_t, where it
exists, is a signed type the same size as the unsigned type size_t.
ptrdiff_t, though it's not defined that way, is very likely to be a
signed type the same size as size_t, so using it to store the
difference between two size_t values (where the result may be negative
and is unlikely to overflow) is not unreasonable.

If you want to be sure you can avoid overflow (or as sure as it's
possible to be), you can use intmax_t if it's available, or long if
you want to maintain C90 compatibility.
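For concreteness, a minimal sketch of that suggestion (the strings are
just placeholders for any two calls that yield size_t values):

    #include <stdio.h>
    #include <string.h>
    #include <stdint.h>   /* intmax_t, C99 */

    int main(void)
    {
        size_t a = strlen("short");
        size_t b = strlen("a much longer string");

        /* Subtracting size_t values directly wraps around rather than
           going negative, so convert to a wide signed type first.
           intmax_t is the widest signed type the implementation has. */
        intmax_t diff = (intmax_t)a - (intmax_t)b;

        printf("difference: %jd\n", diff);   /* prints a negative value */
        return 0;
    }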
>>> The point is that once we admit a fundamental, if subtle, change into
>>> the language,
>>
>> No, the point is that having different size integers, and size_t for
>> indices, is _not_ a change. That's how C has been since at least 1989.
>
> size_t is a change. It is not present in K and R C, and rightly
> so.
A lot of things aren't in K&R C. Time has passed. size_t was
introduced in the 1989 ANSI standard; the time to complain about it
was about 20 years ago.
> Another change is not having int as the natural integer type for
> the platform.
That has always been an unenforceable requirement. There is no clear
definition for the phrase "natural integer type for the platform" (or
as the C99 standard says, "the natural size suggested by the
architecture of the execution environment"). And if there were a
clear definition, I seriously doubt that it would involve the size of
the address bus.
> Whilst there is an argument for having fixed-size types
> in a language, C chose the other route. We are now seeing a reversal
> of policy so that existing 32-bit code won't break under 64 bits. That
> is, badly-written 32-bit code. If it was written properly it wouldn't
> assume 32 bit ints and longs.
The same thing happened during the transition from 16-bit to 32-bit
architectures; there were implementations for 32-bit machines that had
16-bit ints. Somehow we muddled through.
> I am arguing for keeping the existing policy that has served C well
> for so long. size_t was tolerable as long as no programmer worth his
> salt used it. The danger is that the move to 64 bits will encourage or
> even force its widespread use.
You are not arguing for leaving C as it is; you are arguing for
changing it to what you want it to be. If you don't like C the way
it's actually defined, that's fine. Arguing that the way you want it
to be is the way it actually is doesn't help your case.

--
Keith Thompson (The_Other_Keith) ks***@mib.org <http://www.ghoti.net/~kst>
San Diego Supercomputer Center <*> <http://users.sdsc.edu/~kst>
"We must do something. This is something. Therefore, we must do this."
-- Antony Jay and Jonathan Lynn, "Yes Minister"
Apr 20 '07 #132
Malcolm McLean wrote On 04/20/07 17:58,:
> "Richard Bos" <rl*@hoekstra-uitgeverij.nl> wrote
>> [...]
>>
>> No, the point is that having different size integers, and size_t for
>> indices, is _not_ a change. That's how C has been since at least 1989.
>
> size_t is a change. It is not present in K and R C, and rightly so.

K&R C also lacked void and void*, long double,
function prototypes, <stdarg.h> and variadic functions,
<stdlib.h>, <stddef.h>, offsetof, <limits.h>, ... The
introductions of *all* these things and more were changes.

> Another change is not having int as the natural integer type for the
> platform.

You keep asserting this, but it was never so. int was
always a "useful" size that "made sense" for the platform.
I've used eight-bit machines with C implementations, but
never have I seen an 8-bit int.

> Whilst there is an argument for having fixed-size types in a language, C
> chose the other route. We are now seeing a reversal of policy so that
> existing 32-bit code won't break under 64 bits. That is, badly-written
> 32-bit code. If it was written properly it wouldn't assume 32 bit ints
> and longs.

Part of the "Spirit of C" (I'm not making this up;
it's in the Rationale) is "Trust the programmer." It turns
out that some programmers are untrustworthy, which is too
bad -- but the cure isn't to dumb down C, but to smarten up
the bad programmers.

And do you imagine that expanding int to 64 or 128 or
1024 bits will magically fix all that bad code? Then it's
obvious you've had NO experience in porting code to an
environment where the data type sizes change. If you had
any such experience, you wouldn't believe such folderol.
> I am arguing for keeping the existing policy that has served C well
> for so long.

I am arguing for keeping the existing policy that assigns
colors to all C's fundamental data types. ("But there is no
such policy, and never has been!") Right you are, Jocko,
right you are.

> size_t was tolerable as long as no programmer worth his salt used it.
> The danger is that the move to 64 bits will encourage or even force its
> widespread use.
"It's a poor workman who blames his tools."

--
Er*********@sun.com
Apr 20 '07 #133
Ian Collins <ia******@hotmail.com> writes:
> Malcolm McLean wrote:
> [...]
>> Whilst there is an argument for having fixed-size types in a
>> language, C chose the other route. We are now seeing a reversal of
>> policy so that existing 32-bit code won't break under 64 bits. That is,
>> badly-written 32-bit code. If it was written properly it wouldn't assume
>> 32 bit ints and longs.
>
> So you are labelling the BSD networking code as bad? The assumption
> that long is 32 bits is pretty widespread there.
Is it? If so, I'd say that's a bad assumption. I'd be very surprised
if that code isn't already running on systems with 64-bit longs.
> Whenever C hits the real world, developers have added fixed size types.
> Now they are standardised. Good.
Agreed. Though I suspect fixed size types tend to be overused. In
many cases, what you really need is a type able to represent (at
least) some specified range of values. C99's <stdint.h>,
unfortunately IMHO, gave the easy-to-remember names to the exact-width
types rather than the potentially more useful least-width types.

For example:

int_least32_t is the narrowest signed type of at least 32 bits.

int_fast32_t is the "fastest" signed type of at least 32 bits.

int32_t is *exactly* 32 bits wide; if there is no such type,
int32_t doesn't exist.

On a 64-bit CPU, for example, int_least32_t might be 32 bits and
int_fast32_t might be 64 bits.

I think that int32_t, the exact-width type, is really needed only for
conformance to some externally imposed data layout. For internal
numeric calculations, int_fast32_t or int_least32_t is more
appropriate, depending on whether you need large arrays. But since
the name "int32_t" is more obvious, a lot of programmers are going to
use it unnecessarily.

That's probably not *too* bad. If int32_t exists, it's going to be
the same as int_least32_t, and if it doesn't, the code will fail to
compile in a rather obvious manner. (But then the programmer may be
tempted to define his own "int32_t" that's actually 64 bits.)

IMHO, it would have been better to define:

int_least32_t
int_fast32_t
int_exact32_t

or perhaps even to introduce a new syntax for declaring such types
rather than using ad hoc typedefs.
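To make the distinction concrete, a small sketch (the widths printed
depend entirely on the implementation; the comments show one plausible
LP64 outcome, not a guarantee):

    #include <stdio.h>
    #include <limits.h>
    #include <stdint.h>

    int main(void)
    {
        /* Exact width: need not exist on exotic hardware.      e.g. 32 */
        printf("int32_t:       %zu bits\n", sizeof(int32_t) * CHAR_BIT);
        /* Narrowest type of at least 32 bits: always provided. e.g. 32 */
        printf("int_least32_t: %zu bits\n", sizeof(int_least32_t) * CHAR_BIT);
        /* "Fastest" type of at least 32 bits: may be wider.    e.g. 64 */
        printf("int_fast32_t:  %zu bits\n", sizeof(int_fast32_t) * CHAR_BIT);
        return 0;
    }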

--
Keith Thompson (The_Other_Keith) ks***@mib.org <http://www.ghoti.net/~kst>
San Diego Supercomputer Center <*> <http://users.sdsc.edu/~kst>
"We must do something. This is something. Therefore, we must do this."
-- Antony Jay and Jonathan Lynn, "Yes Minister"
Apr 20 '07 #134
Keith Thompson wrote:
> Ian Collins <ia******@hotmail.com> writes:
>> Malcolm McLean wrote:
>> [...]
>>> Whilst there is an argument for having fixed-size types in a
>>> language, C chose the other route. We are now seeing a reversal of
>>> policy so that existing 32-bit code won't break under 64 bits. That is,
>>> badly-written 32-bit code. If it was written properly it wouldn't assume
>>> 32 bit ints and longs.
>>
>> So you are labelling the BSD networking code as bad? The assumption
>> that long is 32 bits is pretty widespread there.
>
> Is it? If so, I'd say that's a bad assumption. I'd be very surprised
> if that code isn't already running on systems with 64-bit longs.

Yes, it has been for a long time. I was referring to the earlier code
where unsigned long is used extensively for 32 bit IP addresses. Even
some of the function names are a giveaway, ntohl/htonl for instance.
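For illustration, the modern idiom states the 32-bit contract with
uint32_t instead of unsigned long (a sketch assuming POSIX's
<arpa/inet.h> for ntohl/htonl; they are not part of ISO C):

    #include <stdio.h>
    #include <stdint.h>
    #include <arpa/inet.h>   /* ntohl/htonl -- POSIX, not ISO C */

    int main(void)
    {
        /* An IPv4 address is exactly 32 bits, so uint32_t says what is
           meant; unsigned long merely happens to be 32 bits on ILP32
           systems and becomes 64 bits under LP64. */
        uint32_t net_addr  = htonl(0xC0A80001u);   /* 192.168.0.1 */
        uint32_t host_addr = ntohl(net_addr);

        printf("%u.%u.%u.%u\n",
               (unsigned)((host_addr >> 24) & 0xFF),
               (unsigned)((host_addr >> 16) & 0xFF),
               (unsigned)((host_addr >>  8) & 0xFF),
               (unsigned)(host_addr & 0xFF));
        return 0;
    }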
>> Whenever C hits the real world, developers have added fixed size types.
>> Now they are standardised. Good.
>
> Agreed. Though I suspect fixed size types tend to be overused. In
> many cases, what you really need is a type able to represent (at
> least) some specified range of values. C99's <stdint.h>,
> unfortunately IMHO, gave the easy-to-remember names to the exact-width
> types rather than the potentially more useful least-width types.

Agreed.

--
Ian Collins.
Apr 20 '07 #135

"Eric Sosman" <Er*********@su n.comwrote in message
news:1177108464 .140009@news1nw k...
Malcolm McLean wrote On 04/20/07 17:58,:
>change is not having int as the natural integer type for the platform.

You keep asserting this, but it was never so. int was
always a "useful" size that "made sense" for the platform.
I've used eight-bit machines with C implementations , but
never have I seen an 8-bit int.
Your machine had an eight bit address bus?
There is a bit of wooliness in the "natural integer type". However generally
it is the size needed to index the largest possible array, which generally
boils down to the size of the address bus, but not on old x86s because of
the weirdness of the architecture.
On a machine with 64 bits of flat address space and a 64 bit data register
file the natural integer size is quite clearly 64 bits. At the moment there
might be a performace case for using 32-bit integers to conserve cache
space. I've not said that all the good arguments are on my side. However I
suspect that this issue will prove to be quite short-lived.
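The sizes in question are easy to inspect; a minimal sketch (the
comments show common data models, which are conventions, not guarantees
of the standard):

    #include <stdio.h>

    int main(void)
    {
        /* Typical results by data model:
           ILP32 (most 32-bit targets):  int 4, long 4, void* 4
           LP64  (64-bit Unix):          int 4, long 8, void* 8
           LLP64 (64-bit Windows):       int 4, long 4, void* 8 */
        printf("int: %zu  long: %zu  void*: %zu\n",
               sizeof(int), sizeof(long), sizeof(void *));
        return 0;
    }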

--
Free games and programming goodies.
http://www.personal.leeds.ac.uk/~bgy1mm

Apr 20 '07 #136
Malcolm McLean wrote:
> "Eric Sosman" <Er*********@sun.com> wrote in message
> news:1177108464.140009@news1nwk...
>> Malcolm McLean wrote On 04/20/07 17:58,:
>>> change is not having int as the natural integer type for the platform.
>>
>> You keep asserting this, but it was never so. int was
>> always a "useful" size that "made sense" for the platform.
>> I've used eight-bit machines with C implementations, but
>> never have I seen an 8-bit int.
>
> Your machine had an eight bit address bus?
> There is a bit of wooliness in the "natural integer type". However
> generally it is the size needed to index the largest possible array,
> which generally boils down to the size of the address bus, but not on
> old x86s because of the weirdness of the architecture.

No, it can't do that, int is a signed type, so it can never index the
largest possible array.

> On a machine with 64 bits of flat address space and a 64 bit data
> register file the natural integer size is quite clearly 64 bits. At the
> moment there might be a performance case for using 32-bit integers to
> conserve cache space. I've not said that all the good arguments are on
> my side. However I suspect that this issue will prove to be quite
> short-lived.
Considering 64 bit systems have been in widespread use for over a
decade, short is a relative term.
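A small illustration of the indexing point above: with a 32-bit signed
int, the loop index would overflow before covering a large object, while
size_t is defined to span any object's size. (A sketch; the 3 GiB
allocation is only meaningful where size_t is wider than 32 bits, and on
a 32-bit system the size computation below would wrap.)

    #include <stdlib.h>

    int main(void)
    {
        size_t n = (size_t)3 * 1024 * 1024 * 1024;   /* 3 GiB */
        unsigned char *buf = malloc(n);
        if (buf == NULL)
            return EXIT_FAILURE;

        /* A 32-bit int overflows at 2^31 - 1, well before n - 1;
           size_t can index every byte of the allocation. */
        for (size_t i = 0; i < n; i++)
            buf[i] = 0;

        free(buf);
        return EXIT_SUCCESS;
    }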

--
Ian Collins.
Apr 21 '07 #137
Ian Collins said:
> Malcolm McLean wrote:
<snip>
>> Whilst there is an argument for having fixed-size types in a
>> language, C chose the other route. We are now seeing a reversal of
>> policy so that existing 32-bit code won't break under 64 bits. That
>> is, badly-written 32-bit code. If it was written properly it wouldn't
>> assume 32 bit ints and longs.
>
> So you are labelling the BSD networking code as bad? The assumption
> that long is 32 bits is pretty widespread there.
To be diplomatic, let us just say that the BSD networking code would
have been *even better* if it didn't make invalid assumptions about
integer type sizes.
> Whenever C hits the real world, developers have added fixed size
> types. Now they are standardised. Good.
Whenever language designers specify fixed size types, they have to come
back later and specify some more. Bad.

--
Richard Heathfield
"Usenet is a strange place" - dmr 29/7/1999
http://www.cpax.org.uk
email: rjh at the above domain, - www.
Apr 21 '07 #138
Richard Heathfield wrote:
> Ian Collins said:
>> Whenever C hits the real world, developers have added fixed size
>> types. Now they are standardised. Good.
>
> Whenever language designers specify fixed size types, they have to come
> back later and specify some more. Bad.

If you don't specify the types, everyone else who wants them has to add
their own, so we ended up with typedefs like U8, INT16 and just about
every other possible naming of fixed types. At least the standard now
sets a naming convention.
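A sketch of the pattern (the typedef names are illustrative of what
projects invented for themselves, not taken from any particular
codebase):

    /* Pre-C99: every project rolled its own fixed-width names. */
    typedef unsigned char  U8;      /* now spelled uint8_t */
    typedef short          INT16;   /* now spelled int16_t */
    typedef unsigned long  U32;     /* a trap: 64 bits under LP64; now uint32_t */

    /* C99: one standard spelling, checked by the implementation. */
    #include <stdint.h>

    uint8_t  octet;
    int16_t  sample;
    uint32_t ipv4_address;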

--
Ian Collins.
Apr 21 '07 #139
Ian Collins said:
> Richard Heathfield wrote:
>> Ian Collins said:
>>> Whenever C hits the real world, developers have added fixed size
>>> types. Now they are standardised. Good.
>>
>> Whenever language designers specify fixed size types, they have to
>> come back later and specify some more. Bad.
>
> If you don't specify the types, everyone else who wants them has to add
> their own,

Not quite everyone.

> so we ended up with typedefs like U8, INT16 and just about every
> other possible naming of fixed types. At least the standard now sets
> a naming convention.

Which can safely be ignored, on the whole.

--
Richard Heathfield
"Usenet is a strange place" - dmr 29/7/1999
http://www.cpax.org.uk
email: rjh at the above domain, - www.
Apr 21 '07 #140


