Bytes | Software Development & Data Engineering Community

32 or 64 bit processor info in C

Hello,

Is there a way in C to get information at runtime if a processor is 32
or 64 bit?
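There is no standard C call that reports the physical processor's width; portable C can only reveal the model the program was *compiled* under (OS-specific interfaces such as CPUID or uname are needed for the hardware itself). A minimal sketch of the compile-time check (the helper name is illustrative, not from the thread):

```c
#include <limits.h>
#include <stdio.h>

/* Width in bits of a data pointer.  This reports the model the
   program was compiled for (e.g. 32 vs 64 bit), NOT the
   capabilities of the physical CPU, which may support both. */
static int pointer_bits(void)
{
    return (int)(sizeof(void *) * CHAR_BIT);
}
```

A 32-bit binary running on a 64-bit processor will still report 32 here, which is usually what the program actually needs to know.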

Cheers,

Broeisi

Apr 10 '07
On Apr 18, 10:02 pm, Eric Sosman <Eric.Sos...@sun.com> wrote:
Richard Heathfield wrote on 04/18/07 15:43:
As for the malloc example, I myself usually write an
assignment statement and a separate test. This is not so
much out of a concern that the whole thing would be too
long, but to direct the focus: "I will now allocate some
memory. (By the way, I'll also check for failure.)" But
sometimes I'll gang the whole thing together, particularly
during an initialization where I'm just going to exit the
program on a failure:

if ( (buff1 = malloc(N1 * sizeof *buff1)) == NULL
  || (buff2 = malloc(N2 * sizeof *buff2)) == NULL
  || (buff3 = malloc(N3 * sizeof *buff3)) == NULL ) {
perror ("malloc");
fputs ("No memory; bye-bye!\n", stderr);
exit (EXIT_FAILURE);
}

I think this is easier to read than three assignments, three
tests, and three error-exits, or than shuffling the test-and-
exit off to a wrapper function -- although I do *that* too,
sometimes. (Note that three assignments followed by one
three-way test and one error-exit is not quite the same: if malloc() sets errno, the successful allocation of buff3 could
obscure why buff2's allocation failed. malloc() need not set
errno and some do not, but I take the optimistic view and try
to give the poor user all the available diagnoses, even if
they're suspect.)
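The wrapper-function alternative mentioned here is often written along these lines (xmalloc is a conventional name, not Eric's code):

```c
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical "allocate or die" wrapper: centralizes the
   test-and-exit so call sites stay uncluttered. */
static void *xmalloc(size_t n)
{
    void *p = malloc(n);
    if (p == NULL) {
        perror("malloc");
        fputs("No memory; bye-bye!\n", stderr);
        exit(EXIT_FAILURE);
    }
    return p;
}
```

The trade-off is exactly the one discussed in the thread: the wrapper can only exit (or call a registered handler), so it suits initialization-time allocations, not ones you intend to recover from.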
This doesn't seem very sensible to me. I mean, you already know why
malloc() has failed (how many different ways can you say "out of
memory"?), so if I was implementing malloc I definitely wouldn't
bother setting errno, and if I'm recovering from a failed malloc then
why would I risk giving the user a mystifying spurious error message
resulting from errno being set and never cleared half a page of code
above?
>Does your family tree blossom with
lawyers, politicians, marketeers, and spin doctors?
I think that's a touch unfair. I'm actually on your side in this
discussion, but I think you overstate the case here.

Tastes vary. Or, "There's no point arguing with Gus."

--
Eric.Sos...@sun.com

Apr 21 '07 #151
Fr************@googlemail.com wrote:
On Apr 18, 10:02 pm, Eric Sosman <Eric.Sos...@sun.com> wrote:
>>
[...] if malloc() sets errno, the successful allocation of buff3 could
obscure why buff2's allocation failed. malloc() need not set
errno and some do not, but I take the optimistic view and try
to give the poor user all the available diagnoses, even if
they're suspect.)

This doesn't seem very sensible to me. I mean, you already know why
malloc() has failed (how many different ways can you say "out of
memory"?), so if I was implementing malloc I definitely wouldn't
bother setting errno, and if I'm recovering from a failed malloc then
why would I risk giving the user a mystifying spurious error message
resulting from errno being set and never cleared half a page of code
above?
How many ways can you say "out of memory?" More than one,
I'm sure. A few possibilities:

- "Out of memory" (the basic bleat)

- "Memory quota exceeded" (maybe if the user petitions the
sysadmin for an increased quota all will be well)

- "No more swap space" (maybe if more swap can be allocated
all will be well)

- "Resource temporarily unavailable" (yes, I've seen this one)

- "No error" (I've seen this one, too)

... and probably others, too. The point is that a library function
may have several reasons for failing, and different reasons may
suggest different responses or corrections. IMHO it's better to
pass along whatever diagnostic information the implementation is
willing to provide than to throw a blanket over it and force the
user to guess about the reasons. Sometimes the diagnostic data is
misleading ("malloc: connection reset by peer"), but when it isn't
it can be most helpful.

--
Eric Sosman
es*****@acm-dot-org.invalid
Apr 21 '07 #152
CBFalconer wrote:
Ian Collins wrote:
>>Malcolm McLean wrote:

... snip ...
>>>Data is almost always either real numbers, strings, booleans,
enumerated symbols, or indices into arrays (technically a subset
of keys). So double or float, char *, and int should be basically
all you need.

Never written any device drivers or protocol stacks then?

(val & 0xff) is fairly well guaranteed to carry exactly 8 bits.
True, but it's a bit silly when one can use a fixed size type to
represent a register or fields in a packet header.
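A sketch of what Ian describes, using the C99 `<stdint.h>` fixed-width types (the header layout itself is invented for illustration):

```c
#include <stdint.h>

/* Hypothetical packet header built from exact-width types:
   each field has a fixed, portable size, which matters when
   the struct mirrors a wire format or a hardware register
   block -- unlike int, whose width varies by platform. */
struct packet_header {
    uint8_t  version;
    uint8_t  flags;
    uint16_t length;
    uint32_t sequence;
};
```

(In real protocol code one still has to worry about byte order and struct padding, but the field widths at least stop being platform-dependent.)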

--
Ian Collins.
Apr 21 '07 #153
Malcolm McLean wrote:
>
"Walter Roberson" <ro******@ibd.nrc-cnrc.gc.ca> wrote:
>>
Are we talking about C for general purpose computing, or are we talking
about imposing non-trivial architecture restrictions on the machines
that will use this modified C?
We're talking about what int should be on a typical 64-bit machine. I'm
arguing 64 bits, the emerging convention is 32 bits, which I oppose.
You appear to consistently miss the point that 64 bit systems and the
LP64 integer model have been in widespread use for over a decade. So the
convention has well and truly emerged, settled down and had kids.
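The LP64 convention Ian refers to can be classified with a small sketch (the model names are the conventional ones; this assumes 8-bit chars, and the helper is invented for illustration):

```c
#include <limits.h>

/* Classify the integer model the program was compiled under.
   LP64 (typical 64-bit Unix): int 32, long and pointers 64.
   LLP64 (64-bit Windows):     int and long 32, pointers 64.
   ILP32 (32-bit systems):     all three 32. */
static const char *integer_model(void)
{
    if (sizeof(void *) == 8 && sizeof(long) == 8)
        return sizeof(int) == 8 ? "ILP64" : "LP64";
    if (sizeof(void *) == 8 && sizeof(long) == 4)
        return "LLP64";
    if (sizeof(void *) == 4)
        return "ILP32";
    return "other";
}
```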

--
Ian Collins.
Apr 21 '07 #154
Fr************@googlemail.com writes:
On Apr 18, 10:02 pm, Eric Sosman <Eric.Sos...@sun.com> wrote:
[...]
> if ( (buff1 = malloc(N1 * sizeof *buff1)) == NULL
|| (buff2 = malloc(N2 * sizeof *buff2)) == NULL
|| (buff3 = malloc(N3 * sizeof *buff3)) == NULL ) {
perror ("malloc");
fputs ("No memory; bye-bye!\n", stderr);
exit (EXIT_FAILURE);
}

I think this is easier to read than three assignments, three tests,
and three error-exits, or than shuffling the test-and-exit off to
a wrapper function -- although I do *that* too, sometimes. (Note
that three assignments followed by one three-way test and one
error-exit is not quite the same: if malloc() sets errno, the
successful allocation of buff3 could obscure why buff2's allocation
failed. malloc() need not set errno and some do not, but I take
the optimistic view and try to give the poor user all the available
diagnoses, even if they're suspect.)

This doesn't seem very sensible to me. I mean, you already know why
malloc() has failed (how many different ways can you say "out of
memory"?), so if I was implementing malloc I definitely wouldn't
bother setting errno, and if I'm recovering from a failed malloc then
why would I risk giving the user a mystifying spurious error message
resulting from errno being set and never cleared half a page of code
above?
On one system (Solaris 9), the malloc man page says that a failing
malloc() can set errno to either of two values:

The malloc(), calloc(), and realloc() functions will fail
if:

ENOMEM
The physical limits of the system are exceeded by size
bytes of memory which cannot be allocated.

EAGAIN
There is not enough memory available to allocate size
bytes of memory; but the application could try again
later.

On another (Red Hat Linux), the man page says:

The Unix98 standard requires malloc(), calloc(), and realloc()
to set errno to ENOMEM upon failure. Glibc assumes that this is
done (and the glibc versions of these routines do this); if you
use a private malloc implementation that does not set errno,
then certain library routines may fail without having a reason
in errno.

The gory details are strictly off-topic, of course, but the point is
that it is possible for malloc() to provide more information in errno
than just "not enough memory". Or it can not bother setting errno at
all. But if you set errno to 0 before the call, you can *probably*
assume that if malloc() failed *and* errno != 0, then the value of
errno is meaningful.
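Keith's set-errno-to-zero pattern might be sketched as follows (checked_malloc is a hypothetical helper, not from the thread):

```c
#include <errno.h>
#include <stdlib.h>
#include <string.h>

/* Clear errno before the call so that, on failure, a non-zero
   errno can *probably* be attributed to malloc itself rather
   than to some earlier library call. */
static void *checked_malloc(size_t n, const char **why)
{
    errno = 0;
    void *p = malloc(n);
    if (p == NULL)
        *why = (errno != 0) ? strerror(errno)
                            : "unknown (errno not set)";
    else
        *why = NULL;
    return p;
}
```

As Keith notes, this only gives a probable diagnosis: the standard does not require malloc() to set errno, so the "unknown" branch is a real possibility on some implementations.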

That's not necessarily true for library functions in general, though.
A function might use other functions internally; those functions might
set errno on failure even if the calling function doesn't. It's not
uncommon for a successful fopen() call to set errno to some non-zero
value; it's also possible for a failing function to indirectly set
errno to a non-zero value that doesn't reflect the actual cause of the
error.

<OT>I think POSIX makes more guarantees in this area.</OT>

--
Keith Thompson (The_Other_Keith) ks***@mib.org <http://www.ghoti.net/~kst>
San Diego Supercomputer Center <*> <http://users.sdsc.edu/~kst>
"We must do something. This is something. Therefore, we must do this."
-- Antony Jay and Jonathan Lynn, "Yes Minister"
Apr 21 '07 #155

"Eric Sosman" <es*****@acm-dot-org.invalid> wrote in message
news:ap**********************@comcast.com...
>
How many ways can you say "out of memory?" More than one,
I'm sure. A few possibilities:
If you ask for a trivial amount of memory on a big system then it is much
more likely that the computer is broken than that it is genuinely out of
memory.

If you ask for a large amount then it may not have enough installed to do
the calculation.

The message you want to send to the user is different. Also in the second
case it is worth anticipating the failure and having a recovery strategy; in
the first it is probably futile - barring safety critical systems and the
like.

--
Free games and programming goodies.
http://www.personal.leeds.ac.uk/~bgy1mm

Apr 21 '07 #156
Malcolm McLean wrote, On 21/04/07 17:45:
>
"Flash Gordon" <sp**@flash-gordon.me.uk> wrote in message
news:5p************@news.flash-gordon.me.uk...
>>
Some of the DSPs with 24 or 48 bit integer registers and ALUs will
have had 16/32 bit address busses. However, I suppose Malcolm might be
happy with an int larger than the address bus by a few bits.
Ideally you'd have one extra bit so that signed arithmetic couldn't fail
to have enough resolution. However at least in C the convention is that
indices are scaled by the data type. So the problem only arises for char
arrays taking up over half the address space. That happens rarely enough
for it to be reasonable to say "many functions may not work on your
dataset, you will have to code specially with unsigned types".
You entirely missed the point that on those processors an int is WIDER
than the address bus. You also ignored (and snipped) my other points.
Most architectures don't distinguish between address and data integer
registers,
Most that you have come across possibly, but I doubt that you have come
across most architectures.
so it makes sense to use the same registers to hold pointers
and ints.
It is not simply the width of the address register that is important, it
is also the memory bandwidth. Since memory is slow using 64 bit integers
when you do not need them can slow things significantly.
--
Flash Gordon
Apr 21 '07 #157
In article <j9**********************@bt.com>,
Malcolm McLean <re*******@btinternet.com> wrote:
>
"Walter Roberson" <ro******@ibd.nrc-cnrc.gc.ca> wrote in message
news:f0**********@canopus.cc.umanitoba.ca...
>In article <C_**********************@bt.com>,
Malcolm McLean <re*******@btinternet.com> wrote:
>>>Most architectures don't distinguish between address and data integer
registers, so it makes sense to use the same registers to hold pointers
and
ints.
>Are we talking about C for general purpose computing, or are we talking
about imposing non-trivial architecture restrictions on the machines
that will use this modified C?
>We're talking about what int should be on a typical 64-bit machine.
Is that a specific "typical 64-bit machine", or a generalized 64
bit machine?
>I'm
arguing 64 bits, the emerging convention is 32 bits, which I oppose.
The machine I'm using right now is a 64 bit machine, a very typical
one at the time it was made. int, long and pointer are all 32 bits
on it; long long is 64 bits (and fully supported by the architecture.)

I could complain to the designers about them following your
so-called "emerging convention", but I would have to pull some of them
out of retirement to do so, considering that the model line was
introduced to the market in 1993 and they stopped selling this particular
edition of it in 1996. Yes, LL64 machines have already been on the
market for 14 years, and Yes, my deskside 64 bit machine is 12 years old.

The company that made my machine has made some of the largest
single-image compute clusters in the world (i.e., a single operating
system instance is controlling the entire cluster), and oddly those compute
clusters all use int of 32 bits. We're talking machines with multiple
terabytes of cache-coherent RAM (accessible from any program),
and petabytes of disk storage. But somehow in that decade+ of
building record-breaking computers, they missed that simple trick
of just making int 64 bits.

Boy I bet they're sorry in retrospect -- just think, if they had had
your wisdom, then instead of merely building the biggest computers on
Earth, they could have built the biggest computers in the Solar System!
(Oh wait, they did that. Nevermind.)
>We're
not talking about removing latitude from the language so that DSP chips and
the like can't use funny integer sizes if it is appropriate for them, nor
are we talking about modifying the standard.
Let's see if I have this straight: you don't want to modify the
standard, you just want the major compiler and OS and chip vendors to
come to their senses and modify their software and instruction
architectures to -de facto- standardize on 64 bit int, because
that's The Right Thing To Do? Is that like, "I would never legislate
a state religion: I would just organize a large-scale boycott campaign
to talk convince people to voluntarily see the error of their ways if
they don't adopt mine!" ?
--
Okay, buzzwords only. Two syllables, tops. -- Laurie Anderson
Apr 22 '07 #158

"Walter Roberson" <ro******@ibd.nrc-cnrc.gc.ca> wrote in message
news:f0**********@canopus.cc.umanitoba.ca...
In article <j9**********************@bt.com>,
Malcolm McLean <re*******@btinternet.com> wrote:
>>
"Walter Roberson" <ro******@ibd.nrc-cnrc.gc.ca> wrote in message
news:f0**********@canopus.cc.umanitoba.ca...
>>In article <C_**********************@bt.com>,
Malcolm McLean <re*******@btinternet.com> wrote:
Most architectures don't distinguish between address and data integer
registers, so it makes sense to use the same registers to hold pointers
and
ints.
>>Are we talking about C for general purpose computing, or are we talking
about imposing non-trivial architecture restrictions on the machines
that will use this modified C?
>>We're talking about what int should be on a typical 64-bit machine.

Is that a specific "typical 64-bit machine", or a generalized 64
bit machine?
>>I'm
arguing 64 bits, the emerging convention is 32 bits, which I oppose.

The machine I'm using right now is a 64 bit machine, a very typical
one at the time it was made. int, long and pointer are all 32 bits
on it; long long is 64 bits (and fully supported by the architecture.)

I could complain to the designers about them following your
so-called "emerging convention", but I would have to pull some of them
out of retirement to do so, considering that the model line was
introduced to the market in 1993 and they stopped selling this particular
edition of it in 1996. Yes, LL64 machines have already been on the
market for 14 years, and Yes, my deskside 64 bit machine is 12 years old.

The company that made my machine has made some of the largest
single-image compute clusters in the world (i.e., a single operating
system instance is controlling the entire cluster), and oddly those
compute
clusters all use int of 32 bits. We're talking machines with multiple
terabytes of cache-coherent RAM (accessible from any program),
and petabytes of disk storage. But somehow in that decade+ of
building record-breaking computers, they missed that simple trick
of just making int 64 bits.

Boy I bet they're sorry in retrospect -- just think, if they had had
your wisdom, then instead of merely building the biggest computers on
Earth, they could have built the biggest computers in the Solar System!
(Oh wait, they did that. Nevermind.)
>>We're
not talking about removing latitude from the language so that DSP chips and
the like can't use funny integer sizes if it is appropriate for them, nor
are we talking about modifying the standard.

Let's see if I have this straight: you don't want to modify the
standard, you just want the major compiler and OS and chip vendors to
come to their senses and modify their software and instruction
architectures to -de facto- standardize on 64 bit int, because
that's The Right Thing To Do? Is that like, "I would never legislate
a state religion: I would just organize a large-scale boycott campaign
to talk convince people to voluntarily see the error of their ways if
they don't adopt mine!" ?
Never heard of the software crisis? The hardware people have got their act
together and are giving us lots of cheap processing power. The software
people, us, haven't. So all this bluster about mighty mainframes is beside
the point.

One important reason why projects fail or go over budget is that it is too
difficult to get components to talk to each other, usually because of the
lack of standard conventions for interfacing. The reduction in the number of
data types swilling round the pond is a small part, though only a small
part, in alleviating this. It is relatively rare for a project to fail
because the hardware is 10% too slow to run it, and therefore an improvement
in cache coherency might fix it.

Though mainframes are an important part of the computing world, they tend to
be staffed by teams of professional programmers, and run software specially
written for them. The programs installed and the data run through them can
be strictly controlled.
In a consumer environment things are very different. Small companies are
trying to write software with very limited resources which will be run on
hardware they can't exactly specify, and will have to exchange data with
programs they know nothing about. If there is a bug you don't have a list of
installations and an agreed patch schedule. If Microsoft decide that your
compiler will no longer run on Windows Vista, you have no choice but to run
the code through a different compiler.
However if your mainframe has 32 bit pointers then it has a 32-bit limit on
the size of data objects, and so 32-bit ints are OK. Presumably it rations
processes to 4GB each of memory, even if underneath it is doing all its
memory calculations in 64 bits.
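Malcolm's 4GB ceiling can be made concrete with SIZE_MAX from C99's `<stdint.h>` (a sketch; address_space_bits is an invented helper):

```c
#include <stdint.h>

/* SIZE_MAX bounds the size of any single object.  With 32-bit
   pointers it is typically 2^32 - 1, i.e. the 4GB per-object
   ceiling described above; under LP64 it is typically 2^64 - 1. */
static int address_space_bits(void)
{
    int bits = 0;
    uintmax_t m = SIZE_MAX;
    while (m != 0) {
        m >>= 1;
        bits++;
    }
    return bits;
}
```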
--
Free games and programming goodies.
http://www.personal.leeds.ac.uk/~bgy1mm

Apr 22 '07 #159

"Flash Gordon" <sp**@flash-gordon.me.uk> wrote in message
news:hj************@news.flash-gordon.me.uk...
Malcolm McLean wrote, On 21/04/07 17:45:
>Most architectures don't distinguish between address and data integer
registers,

Most that you have come across possibly, but I doubt that you have come
across most architectures.
There is an interesting social factor here. Virtually everyone has access to
a PC, quite a high proportion of them have a C compiler installed. Quite a
lot of programmers never need to touch another platform. So those who program
other devices consider themselves to be superior beings.

Few people if any program every architecture. However as it happens I am one
of the superior beings.
>
so it makes sense to use the same registers to hold pointers
and ints.

It is not simply the width of the address register that is important, it
is also the memory bandwidth. Since memory is slow using 64 bit integers
when you do not need them can slow things significantly.
That is a good argument. You could of course always say that it is up to the
programmer to use a shorter type if he wants performance, or you could take
the ANSI route and make everything take and return a size_t, so that sizeof
int is irrelevant. But the reality is that most people will use a kludge of
ints and other types. So is speed more important than code falling over for
large arrays and easy interfacing? How much speed are we talking about? As I
said, engineering is like that. Normally there are some good arguments for
the opposite decision.

--
Free games and programming goodies.
http://www.personal.leeds.ac.uk/~bgy1mm
Apr 22 '07 #160

This thread has been closed and replies have been disabled. Please start a new discussion.
