dependency of sizeof operator

The output of sizeof(int) varies, but what does it depend on? Can anybody
please help me find out whether it is compiler dependent or operating
system dependent?

Mar 23 '07 #1
It depends on both the machine you are using and the compiler. For example,
if you use Turbo C/C++, sizeof(int) gives you 2, but on the same machine
gcc or MS Visual Studio gives sizeof(int) = 4.
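
If you want to check what your own compiler does, a minimal sketch in
portable C (the casts to unsigned long are only there so the printf format
works on pre-C99 compilers):

#include <stdio.h>

int main(void)
{
    /* These results are implementation-defined, except that
       sizeof(char) is 1 by definition. */
    printf("sizeof(char)  = %lu\n", (unsigned long) sizeof(char));
    printf("sizeof(short) = %lu\n", (unsigned long) sizeof(short));
    printf("sizeof(int)   = %lu\n", (unsigned long) sizeof(int));
    printf("sizeof(long)  = %lu\n", (unsigned long) sizeof(long));
    return 0;
}

With an old 16-bit compiler such as Turbo C this typically prints 1, 2, 2, 4;
with gcc or Visual C++ on a 32-bit system it typically prints 1, 2, 4, 4.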

Mar 23 '07 #2
"vandana" <va*************@gmail.comwrites:
The output of sizeof(int) varies, but what does it depend on? Can anybody
please help me find out whether it is compiler dependent or operating
system dependent?
It depends on the C implementation; roughly speaking, on the
compiler. But the C implementation is often strongly influenced
by the operating system and the CPU architecture.
--
Comp-sci PhD expected before end of 2007
Seeking industrial or academic position *outside California* in 2008
Mar 23 '07 #3
"vandana" <va*************@gmail.comwrote in message
The output of sizeof(int) varies, but what does it depend on? Can anybody
please help me find out whether it is compiler dependent or operating
system dependent?
int should be the same size as a register. However, some wicked people have
implemented 32-bit ints on 64-bit machines. So it is up to the compiler
writer.

--
Free games and programming goodies.
http://www.personal.leeds.ac.uk/~bgy1mm

Mar 23 '07 #4
On Mar 23, 4:10 pm, "Malcolm McLean" <regniz...@btinternet.com> wrote:
"vandana" <vandanagupta...@gmail.com> wrote in message
The output of sizeof(int) varies, but what does it depend on? Can anybody
please help me find out whether it is compiler dependent or operating
system dependent?

int should be the same size as a register. However some wicked people have
implemented 32 bit ints on 64-bit machines. So it is up to the compiler
writer.
There is no requirement for an int to match the register size of the
machine it is running on, so why would you say it should be that size?

Mar 23 '07 #5
<sw***********@gmail.com> wrote in message
On Mar 23, 4:10 pm, "Malcolm McLean" <regniz...@btinternet.com> wrote:
>"vandana" <vandanagupta...@gmail.com> wrote in message
The output of sizeof(int) varies, but what does it depend on? Can anybody
please help me find out whether it is compiler dependent or operating
system dependent?

int should be the same size as a register. However some wicked people
have
implemented 32 bit ints on 64-bit machines. So it is up to the compiler
writer.
There is no requirement for an int to match the register size of the
machine it is running on so why would you say it should be that size?
It was always the intention that int should be the natural word size, or in
other words the size of a register. That is why it is not a fixed width, and
why functions used to implicitly return ints.
Since the standard makes no guarantees about execution time, it wouldn't be
sensible to make this a requirement. We are not yet comp.lang.ANSIc, and
"should" is not synonymous with "ANSI says".
--
Free games and programming goodies.
http://www.personal.leeds.ac.uk/~bgy1mm

Mar 24 '07 #6
In article <Wu******************************@bt.com>,
Malcolm McLean <re*******@btinternet.com> wrote:
>It was always the intention that int should be the natural word size,
Modern processors typically don't have only one natural word size.
Just because a processor can manipulate 64-bit words as a unit doesn't
mean that 32-bit words are no longer the natural choice for many
purposes.

-- Richard
--
"Consideration shall be given to the need for as many as 32 characters
in some alphabets" - X3.4, 1963.
Mar 24 '07 #7
Malcolm McLean wrote, On 23/03/07 20:10:
"vandana" <va*************@gmail.comwrote in message
>The output of sizeof(int) varies, but what does it depend on? Can anybody
please help me find out whether it is compiler dependent or operating
system dependent?
int should be the same size as a register. However some wicked people
have implemented 32 bit ints on 64-bit machines. So it is up to the
compiler writer.
What if the inputs to the ALU (and multiplier) are 16 bits but the
actual accumulator and product register are all 32 bits? Before you ask
who would do something that screwy, it is Texas Instruments who are not
exactly small in the DSP market.
--
Flash Gordon
Mar 24 '07 #8
On Fri, 23 Mar 2007 20:10:48 -0000, "Malcolm McLean"
<re*******@btinternet.com> wrote in comp.lang.c:
"vandana" <va*************@gmail.com> wrote in message
The output of sizeof(int) varies, but what does it depend on? Can anybody
please help me find out whether it is compiler dependent or operating
system dependent?
int should be the same size as a register. However some wicked people have
implemented 32 bit ints on 64-bit machines. So it is up to the compiler
writer.
No, it should not necessarily be the same size as a register. The C
Standard _specifically_ states:

A ‘‘plain’’ int object has the natural size suggested by the
architecture of the execution environment (large enough to contain any
value in the range INT_MIN to INT_MAX as defined in the header
<limits.h>).

There are an enormous number of 8-bit controllers and processors
produced running C code each year, at least an order of magnitude more
of them than there are of the desktop CPUs that you seem to think
constitute the whole world. Some of them could not support a C
implementation if int had to be the same size as a register, because
they have no registers wider than 8 bits.
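
If you want to see what a given implementation actually provides, <limits.h>
is the portable place to look; a minimal sketch (nothing compiler-specific
assumed):

#include <stdio.h>
#include <limits.h>

int main(void)
{
    /* The standard only guarantees minimum magnitudes (e.g. INT_MAX >= 32767);
       these macros report the implementation's actual choices. */
    printf("CHAR_BIT    = %d\n", CHAR_BIT);
    printf("INT_MIN     = %d\n", INT_MIN);
    printf("INT_MAX     = %d\n", INT_MAX);
    printf("sizeof(int) = %lu\n", (unsigned long) sizeof(int));
    return 0;
}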

--
Jack Klein
Home: http://JK-Technology.Com
FAQs for
comp.lang.c http://c-faq.com/
comp.lang.c++ http://www.parashift.com/c++-faq-lite/
alt.comp.lang.learn.c-c++
http://www.club.cc.cmu.edu/~ajo/docs/FAQ-acllc.html
Mar 24 '07 #9

"Jack Klein" <ja*******@spamcop.netwrote in message
There are an enormous number of 8-bit controllers and processors
produced running C code each year, at least an order of magnitude more
of them then there are of the desktop CPUs that you seem to think
constitute the whole world. Some of them could not support a C
implementation if int had to be the same size as a register, because
they have no registers wider than 8 bits.
Nowadays I program mainly either desktops or Beowulf clusters; however, I
used to be a games programmer in times past.
Actually, the C compiler on one small embedded system I never actually used,
though I prepared a bid for a project with it, had 8-bit ints. I suppose
now you'll say the failure to get the contract was a reflection of my
incompetence, which I suppose it was.

--
Free games and programming goodies.
http://www.personal.leeds.ac.uk/~bgy1mm

Mar 24 '07 #10
"Malcolm McLean" <re*******@btinternet.comwrites:
"Jack Klein" <ja*******@spamcop.netwrote in message
>There are an enormous number of 8-bit controllers and processors
produced running C code each year, at least an order of magnitude more
of them then there are of the desktop CPUs that you seem to think
constitute the whole world. Some of them could not support a C
implementation if int had to be the same size as a register, because
they have no registers wider than 8 bits.
Nowadays I program mainly either desktops or Beowulf clusters, however
I used to be a games programmer in times past.
Actually the C compiler on one small embedded system I never actually
used, though I prepared a bid for a project with it, had ints of 8
bits.
In a conforming C implementation, int must be at least 16 bits (more
precisely, it must be able to represent all values in the range -32767
... +32767). The implementation you describe may well have been quite
useful, but it wasn't a conforming C implementation. Or it may have
been produced before the C89 standard came out, in which case there
was no standard to conform to.
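
A sketch of the portable way to guard against this, if a program genuinely
needs more range than the guaranteed minimum (the 1000000 threshold is just
an example):

#include <limits.h>

/* Refuse to compile on implementations whose int is too narrow for
   what this (hypothetical) program needs, instead of assuming a
   particular register width. */
#if INT_MAX < 1000000L
#error "this program needs int to hold values up to 1000000"
#endif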

As long as it didn't *claim* to conform to the C standard, I have no
objection to such a thing, though I'd prefer that it not refer to
itself as "C" without qualification.

--
Keith Thompson (The_Other_Keith) ks***@mib.org <http://www.ghoti.net/~kst>
San Diego Supercomputer Center <* <http://users.sdsc.edu/~kst>
"We must do something. This is something. Therefore, we must do this."
-- Antony Jay and Jonathan Lynn, "Yes Minister"
Mar 24 '07 #11
"Keith Thompson" <ks***@mib.orgwrote
"Malcolm McLean" <re*******@btinternet.comwrites:
>Nowadays I program mainly either desktops or Beowulf clusters, however
I used to be a games programmer in times past.
Actually the C compiler on one small embedded system I never actually
used, though I prepared a bid for a project with it, had ints of 8
bits.

In a conforming C implementation, int must be at least 16 bits (more
precisely, it must be able to represent all values in the range -32767
... +32767). The implementation you describe may well have been quite
useful, but it wasn't a conforming C implementation. Or it may have
been produced before the C89 standard came out, in which case there
was no standard to conform to.

As long as it didn't *claim* to conform to the C standard, I have no
objection to such a thing, though I'd prefer that it not refer to
itself as "C" without qualification.
It was an under-powered little thing with something like 4K of RAM, all on
the same chip, and it was proposed to use it to run a parking ticket vending
machine. The customer also asked for an "object-oriented design". I said I
couldn't do OOD in 4K, which made their technical guy look like an idiot,
and I strongly suspect that that was why we lost the contract. I'm too good
at antagonising people.
It stored all its variables in fixed locations and so didn't support
recursion. However, it had a full math library with 3-byte doubles; longs
were 16 bits. It didn't have any I/O, of course; you had to write that
yourself, on hardware that was being developed. That might have been another
reason we lost the contract. I hadn't had any experience of developing
software for buggy hardware, so I told my boss that I was sceptical about
the alleged time estimations the customer had prepared.

--
Free games and programming goodies.
http://www.personal.leeds.ac.uk/~bgy1mm

Mar 25 '07 #12
"Malcolm McLean" <regniz...@btinternet.comwrote:
"vandana" <vandanagupta...@gmail.comwrote in message
The output of sizeof(int) varies, but what does it depend on? Can
anybody please help me find out whether it is compiler
dependent or operating system dependent?

int should be the same size as a register.
[Jack's talked about 8-bit cpus, but...]

What's the register size for an interpreted implementation
of C? On x86 machines, should int be 16-bit to match the ax
register, or 32-bit to match the eax register?

--
Peter

Mar 26 '07 #13
"Peter Nilsson" <ai***@acay.com.auwrote in message
"Malcolm McLean" <regniz...@btinternet.comwrote:
>"vandana" <vandanagupta...@gmail.comwrote in message
The output of sizeof(int) varies, but what does it depend on? Can
anybody please help me find out whether it is compiler
dependent or operating system dependent?

int should be the same size as a register.

[Jack's talked about 8-bit cpus, but...]

What's the register size for an interpreted implementation
of C? On x86 machines, should int be 16-bit to match the ax
register, or 32-bit to match the eax register?
Big enough to index an array, which means 16 bits on the small memory models
and 32 bits on the large ones. However, you could justifiably say that it is
more important to be consistent across models than to offer ints that work
as expected. Conventions and standards can't possibly be expected to cover
every weird architecture.
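
As an aside, the type the standard guarantees can hold the size of any object
(and hence index any array) is size_t, not int; a small sketch of the
distinction (the array size here is arbitrary):

#include <stdio.h>

int main(void)
{
    static double samples[100000];  /* an object can have more elements
                                       than an int can count */
    size_t i;                       /* size_t can index any object */

    for (i = 0; i < sizeof samples / sizeof samples[0]; i++)
        samples[i] = 0.0;

    printf("%lu elements\n",
           (unsigned long) (sizeof samples / sizeof samples[0]));
    return 0;
}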

--
Free games and programming goodies.
http://www.personal.leeds.ac.uk/~bgy1mm

Mar 26 '07 #14
Malcolm McLean wrote, On 26/03/07 20:43:
"Peter Nilsson" <ai***@acay.com.auwrote in message
>"Malcolm McLean" <regniz...@btinternet.comwrote:
>>"vandana" <vandanagupta...@gmail.comwrote in message
The output of sizeof(int) varies, but what does it depend on? Can
anybody please help me find out whether it is compiler
dependent or operating system dependent?

int should be the same size as a register.

[Jack's talked about 8-bit cpus, but...]

What's the register size for an interpreted implementation
of C? On x86 machines, should int be 16-bit to match the ax
register, or 32-bit to match the eax register?
Big enough to index an array. Which means 16 bits on the small memory
models and 32 bits on the large ones.
So given a processor with 16-bit arithmetic registers and a memory space
larger than 16 bits, you want to slow down arithmetic with int or limit
the size of arrays?
However you could justifiably say
that it is more important to be consistent across models than to offer
ints that work as expected. Conventions and standards can't possibly be
expected to cover every weird architecture.
The standard is expected to cover every weird architecture; that is part
of the point of it.
--
Flash Gordon
Mar 27 '07 #15

"Flash Gordon" <sp**@flash-gordon.me.ukwrote in message
Malcolm McLean wrote
>Big enough to index an array. Which means 16 bits on the small memory
models and 32 bits on the large ones.

So given a processor with 16 bit arithmetic registers and a memory space
larger than 16 bits you want to slow down arithmetic with int or limit the
size of arrays.
You've got to ask what sort of person would produce a machine with a data
register size narrower than the address bus. The obvious answer is someone
who thinks that a big memory is more important than speed in accessing it.
So it is unlikely to matter that we are slowing down integer arithmetic.

Admittedly the convention, though not the standard, is breaking down in
such a case. Most computers spend most of their time computing array offsets
and fetching and writing data to and from them, certainly as far as integer
operations are concerned. However that is not true of every program, of
course, and so a very integer maths-intensive programmer would complain,
with justice, that int was not the fastest type. You've got to balance this
against the inconvenience of not being able to index an array with an int.
--
Free games and programming goodies.
http://www.personal.leeds.ac.uk/~bgy1mm

Mar 27 '07 #16
Malcolm McLean wrote, On 27/03/07 22:11:
>
"Flash Gordon" <sp**@flash-gordon.me.ukwrote in message
>Malcolm McLean wrote
>>Big enough to index an array. Which means 16 bits on the small memory
models and 32 bits on the large ones.

So given a processor with 16 bit arithmetic registers and a memory
space larger than 16 bits you want to slow down arithmetic with int or
limit the size of arrays.
You've got to ask what sort of person would produce a machine with a
data register size narrower than the address bus. The obvious answer is
someone who thinks that a big memory is more important than speed in
accessing it. So it is unlikely to matter that we are slowing down
integer arithmetic.
Let's start with the 80386SX, which had a 16-bit data bus and a 24-bit
address bus, so fetching anything larger would require more than one
fetch. Run it with the large memory model so that you can have large
objects! Just the first off the top of my head ;-)
http://www.intel.com/design/intarch/intel386/index.htm
Admittedly the convention, though not the standard, is breaking down in
such a case.
Cases which keep coming and going.
Most computers spend most of their time computing array
offsets and fetching and writing data to and from them, certainly as far
as integer operations are concerned.
They may with your applications but I know a number of server and client
applications which only spend a small fraction of their time doing array
index operations. In fact, I can only think of one part of one
application where that was true.
However that is not true of every
program, of course, and so a very integer maths-intensive programmer
would complain, with justice, that int was not the fastest type. You've
got to balance this against the inconvenience of not being able to index
an array with an int.
Do you really have any information at all to back up your claims? Since
I really do find that the majority of applications I have dealt with
over the years do more integer arithmetic because they wanted to do
integer arithmetic than because they were calculating array offsets.
Also, a number of processors have alternative ways of doing indexing
that do not involve normal arithmetic registers at all, so your argument
even if true would be irrelevant for them.
--
Flash Gordon
In assembler I once ended up using the addressing units on one processor
to do integer arithmetic (not addressing related) so I could get a
higher throughput of integer operations.
Mar 27 '07 #17
On Tue, 27 Mar 2007 23:33:08 +0100, Flash Gordon wrote:
Let's start with the 80386SX, which had a 16-bit data bus and a 24-bit
address bus, so fetching anything larger would require more than one
fetch. Run it with the large memory model so that you can have large
objects! Just the first off the top of my head ;-)
http://www.intel.com/design/intarch/intel386/index.htm
The 68000 also had a 16-bit external data bus, but internally both were 32
bits. I think they were 32-bit CPUs, although the '486 and 68020 were true
ones, as they had a 32-bit external data bus.
Would you call the 68008 and 8088 8-bitters?
--
Coos
Mar 28 '07 #18

"Flash Gordon" <sp**@flash-gordon.me.ukwrote in message
news:rm************@news.flash-gordon.me.uk...
Malcolm McLean wrote, On 27/03/07 22:11:
Most computers spend most of their time computing array
offsets and fetching and writing data to and from them, certainly as far
as integer operations are concerned.

They may with your applications but I know a number of server and client
applications which only spend a small fraction of their time doing array
index operations. In fact, I can only think of one part of one application
where that was true.
However that is not true of every
program, of course, and so a very integer maths-intensive programmer
would complain, with justice, that int was not the fastest type. You've
got to balance this against the inconvenience of not being able to index
an array with an int.

Do you really have any information at all to back up your claims? Since I
really do find that the majority of applications I have dealt with over
the years do more integer arithmetic because they wanted to do integer
arithmetic than because they were calculating array offsets. Also, a
number of processors have alternative ways of doing indexing
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ^^^^
that do not involve normal arithmetic registers at all, so your argument
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ^^^^
even if true would be irrelevant for them.
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
--
Flash Gordon
In assembler I once ended up using the addressing units on one processor
to do integer arithmetic (not addressing related) so I could get a higher
throughput of integer operations.
You're making an elementary mistake. If an integer is used to index an
array, all operations used to calculate that index are indexing operations,
not just the ones within the square brackets.
You'll find that most genuinely numerical data is real and therefore stored
as floating point. I don't say absolutely all - an exception would be a
program that does very intensive work with 24-bit rgba values, or programs
which operate on data which is inherently real but stored in fixed-point
format for speed. Generally, however, integers are for counting things, and
those things tend to be items stored in the computer.
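
For the record, "fixed-point" here just means scaled integer arithmetic; a
toy sketch using a 16.16 format (the 65536 scale factor is only illustrative,
and it assumes a 64-bit intermediate type is available):

#include <stdio.h>

/* Toy 16.16 fixed point: stored value = real value * 65536 */
typedef long fixed;
#define FIXED_ONE 65536L

static fixed fixed_mul(fixed a, fixed b)
{
    /* widen before multiplying so the intermediate product can't overflow */
    return (fixed) (((long long) a * b) / FIXED_ONE);
}

int main(void)
{
    fixed x = 3 * FIXED_ONE + FIXED_ONE / 2;   /* 3.5 */
    fixed y = 2 * FIXED_ONE;                   /* 2.0 */
    fixed z = fixed_mul(x, y);                 /* 7.0 */

    printf("%.2f\n", (double) z / FIXED_ONE);
    return 0;
}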

--
Free games and programming goodies.
http://www.personal.leeds.ac.uk/~bgy1mm
Mar 28 '07 #19
Coos Haak wrote, On 28/03/07 01:23:
On Tue, 27 Mar 2007 23:33:08 +0100, Flash Gordon wrote:
>Let's start with the 80386SX, which had a 16-bit data bus and a 24-bit
address bus, so fetching anything larger would require more than one
fetch. Run it with the large memory model so that you can have large
objects! Just the first off the top of my head ;-)
http://www.intel.com/design/intarch/intel386/index.htm

The 68000 had also a 16 bit external data bus, but internally both were 32
bits. I think they were 32 bit CPUs although the '486 and 68020 were true
ones, as they had a 32 bit external data bus.
Would you call the 68008 and 8088 8 bitters?
That depends on what I was trying to prove ;-)

BTW, I do remember the 68K series. Nice processors.
--
Flash Gordon
Mar 28 '07 #20
Malcolm McLean wrote, On 28/03/07 20:12:
>
"Flash Gordon" <sp**@flash-gordon.me.ukwrote in message
news:rm************@news.flash-gordon.me.uk...
>Malcolm McLean wrote, On 27/03/07 22:11:
Most computers spend most of their time computing array
offsets and fetching and writing data to and from them, certainly as
far as integer operations are concerned.

They may with your applications but I know a number of server and
client applications which only spend a small fraction of their time
doing array index operations. In fact, I can only think of one part of
one application where that was true.
However that is not true of every
program, of course, and so a very integer maths-intensive programmer
would complain, with justice, that int was not the fastest type.
You've got to balance this against the inconvenience of not being
able to index an array with an int.

Do you really have any information at all to back up your claims?
Since I really do find that the majority of applications I have dealt
with over the years do more integer arithmetic because they wanted to
do integer arithmetic than because they were calculating array
offsets. Also, a number of processors have alternative ways of doing
indexing
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ^^^^
>that do not involve normal arithmetic registers at all, so your argument
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ^^^^
>even if true would be irrelevant for them.
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
>--
Flash Gordon
In assembler I once ended up using the addressing units on one
processor to do integer arithmetic (not addressing related) so I could
get a higher throughput of integer operations.
You're making an elementary mistake. If an integer is used to index an
array, all operations used to calculate that index are indexing
operations, not just the ones within the square brackets.
No, I'm not. You are making the elementary mistake of assuming you know
more about what I have worked on than I do. You are also making the
mistake of assuming that you know more about the processors I have
worked on than you do. On the processors I was referring to I was doing
most of the calculations that had ANYTHING to do with indexing in the
addressing units (normally adding 1 each time, but sometimes more
convoluted things) and had enough spare power left in the addressing
units that when doing assembler coding I was doing arithmetic in them
that was NOT related to indexing in any way, shape, or form. Unless in
the following:
arr[i] = x*x+y;
You would consider the "x*x+y" to be to do with indexing, in which case
you are not using a sensible definition.
You'll find that most genuinely numerical data is real and therefore
stored as floating points.
All the numerical data on a lot of aircraft relating to
roll, pitch and altitude is passed around as scaled integers, as I know
having worked on a number of them. All the video data in a lot of
systems is passed around and processed as integers, I know having worked
on them.
I don't say absolutely all - an exception
would be a program that does very intensive work with 24-bit rgba
values,
There is a heck of a lot of image processing software around in daily
use, probably rather more than you would expect since I doubt you would
immediately think of some of the stuff I've worked on.
or programs which operate on data which is inherently real but
stored in fixed-point format for speed.
Such as is the case in a lot of aircraft systems, or at least all of the
several systems I have worked on.
Generally however integer are
for counting things, and those things tend to be items stored in the
computer.
How about money? You generally need a finite number of decimal places.
The same applies to most measures in billing systems, invoicing systems,
cost control systems etc. (I work on such systems these days, so I know
how a subset of them work).
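
A common concrete example: money held as an integer count of the smallest
currency unit, so no amount is ever a floating-point approximation (a toy
sketch only; real billing code is of course more involved):

#include <stdio.h>

int main(void)
{
    /* Amounts kept as whole pence: exact, unlike 19.99 in floating point */
    long price_pence = 1999;                    /* 19.99 */
    long quantity    = 3;
    long total_pence = price_pence * quantity;  /* 5997  */

    printf("total: %ld.%02ld\n", total_pence / 100, total_pence % 100);
    return 0;
}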

Now, do you have any evidence that what you are claiming is actually
true or is it just based on your limited experience?

Note that I'm not claiming that most integer arithmetic is not to do
with indexing, I only have a bit over 20 years experience so there is a
vast amount I have not seen, I am only asking if you have any REAL
justification for believing what you claim, such as a study that
actually does cover a large section of the SW industry.
--
Flash Gordon
Mar 28 '07 #21

"Flash Gordon" <sp**@flash-gordon.me.ukwrote in message
news:mi************@news.flash-gordon.me.uk...
Malcolm McLean wrote, On 28/03/07 20:12:
>>
"Flash Gordon" <sp**@flash-gordon.me.ukwrote in message
news:rm************@news.flash-gordon.me.uk...
>>Malcolm McLean wrote, On 27/03/07 22:11:

Most computers spend most of their time computing array
offsets and fetching and writing data to and from them, certainly as
far as integer operations are concerned.

They may with your applications but I know a number of server and client
applications which only spend a small fraction of their time doing array
index operations. In fact, I can only think of one part of one
application where that was true.

However that is not true of every
program, of course, and so a very integer maths-intensive programmer
would complain, with justice, that int was not the fastest type. You've
got to balance this against the inconvenience of not being able to
index an array with an int.

Do you really have any information at all to back up your claims? Since
I really do find that the majority of applications I have dealt with
over the years do more integer arithmetic because they wanted to do
integer arithmetic than because they were calculating array offsets.
Also, a number of processors have alternative ways of doing indexing
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ^^^^^
>>that do not involve normal arithmetic registers at all, so your argument
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ^^^^^
>>even if true would be irrelevant for them.
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
>>--
Flash Gordon
In assembler I once ended up using the addressing units on one processor
to do integer arithmetic (not addressing related) so I could get a
higher throughput of integer operations.
You're making an elementary mistake. If an integer is used to index an
array, all operations used to calculate that index are indexing
operations, not just the ones within the square brackets.

No, I'm not. You are making the elementary mistake of assuming you know
more about what I have worked on than I do. You are also making the
mistake of assuming that you know more about the processors I have worked
on than you do. On the processors I was referring to I was doing most of
the calculations that had ANYTHING to do with indexing in the addressing
units (normally adding 1 each time, but sometimes more convoluted things)
and had enough spare power left in the addressing units that when doing
assembler coding I was doing arithmetic in them that was NOT related to
indexing in any way, shape, or form. Unless in the following:
arr[i] = x*x+y;
You would consider the "x*x+y" to be to do with indexing, in which case
you are not using a sensible definition.
>You'll find that most genuinely numerical data is real and therefore
stored as floating points.

All the numerical data on a lot of aircraft relating to roll,
pitch and altitude is passed around as scaled integers, as I know having
worked on a number of them. All the video data in a lot of systems is
passed around and processed as integers, I know having worked on them.
I don't say absolutely all - an exception
would be a program that does very intensive work with 24-bit rgba values,

There is a heck of a lot of image processing software around in daily use,
probably rather more than you would expect since I doubt you would
immediately think of some of the stuff I've worked on.
or programs which operate on data which is inherently real but
stored in fixed-point format for speed.

Such as is the case in a lot of aircraft systems, or at least all of the
several systems I have worked on.
Generally however integer are
for counting things, and those things tend to be items stored in the
computer.

How about money? You generally need a finite number of decimal places. The
same applies to most measures in billing systems, invoicing systems, cost
control systems etc. (I work on such systems these days, so I know how a
subset of them work).

Now, do you have any evidence that what you are claiming is actually true
or is it just based on your limited experience?

Note that I'm not claiming that most integer arithmetic is not to do with
indexing, I only have a bit over 20 years experience so there is a vast
amount I have not seen, I am only asking if you have any REAL
justification for believing what you claim, such as a study that actually
does cover a large section of the SW industry.
--
Flash Gordon
Here are some stats collected for Java

x sx y sy r
mul 0.247 0.422 0.367 0.969 0.880
logic 1.816 2.006 3.372 3.982 0.873
iload 14.511 6.332 18.063 9.185 0.847
shift 0.269 0.550 0.465 1.196 0.844
fload 3.461 3.392 6.328 10.089 0.798i
add 0.981 0.837 2.535 2.899 0.788i
div 0.223 0.326 0.186 0.640 0.781
astor 1.293 1.271 0.936 1.524 0.750
fstor 0.870 0.985 1.486 2.333 0.743
f mul 1.095 1.362 2.714 2.595 0.723
f add 0.755 0.849 3.042 3.532 0.697
fcall 10.233 5.845 3.627 4.580 0.694
istor 2.763 2.045 2.505 3.111 0.691
objct 1.461 1.254 0.554 1.267 0.665
fcnst 0.667 0.841 0.332 0.848 0.665
yload 3.717 3.343 7.525 6.502 0.651
f sub 0.523 0.611 1.145 2.212 0.628
icnst 7.126 3.382 3.206 3.037 0.588
aload 17.030 6.243 16.233 8.433 0.554
cjump 3.082 1.356 5.665 3.092 0.546
field 11.777 6.632 11.122 8.813 0.503
ujump 1.697 0.815 0.508 0.683 0.468
compr 0.375 0.441 0.745 2.229 0.443
fdiv 0.315 0.395 0.093 0.203 0.419i
sub 0.768 0.787 0.687 0.968 0.360
array 1.034 0.896 0.224 0.380 -0.355
acnst 0.291 0.346 0.067 0.205 0.289
retrn 3.182 1.539 2.074 3.336 0.258
ystor 3.303 3.841 1.790 1.747 0.183
stack 5.001 3.526 2.400 2.583 0.109
miscl 0.134 0.304 0.005 0.021 0.087F

Figure 4: Summary of results obtained. Here, variables x and y range over
the static and dynamic data respectively and r is the linear correlation
https://www.cs.tcd.ie/John.Waldron/kwilu/mmactee01.ps.

You will see that there are far more iloads (load from stack) and aload
(load object handle) instructions than anything else. Java programs don't
spend their time doing calculations but moving data from one place to
another. There are slightly more floating point arithmetic instructions than
integer ones, but far more integer loads. ystor and yload represent the
array accesses. Assuming plain array access with no calculation of indices
except an increment, each one takes one iload operation, I would guess. The
conclusion is that at least about half of the integer operations are
ultimately going into array index operations, and that most numerical data
is being handled in floating point.

Stats for C will be similar.

--
Free games and programming goodies.
http://www.personal.leeds.ac.uk/~bgy1mm

Mar 28 '07 #22
Malcolm McLean wrote, On 28/03/07 23:10:
>
"Flash Gordon" <sp**@flash-gordon.me.ukwrote in message
news:mi************@news.flash-gordon.me.uk...
>Malcolm McLean wrote, On 28/03/07 20:12:
>>>
"Flash Gordon" <sp**@flash-gordon.me.ukwrote in message
news:rm************@news.flash-gordon.me.uk...
Malcolm McLean wrote, On 27/03/07 22:11:

Most computers spend most of their time computing array
offsets and fetching and writing data to and from them, certainly
as far as integer operations are concerned.

They may with your applications but I know a number of server and
client applications which only spend a small fraction of their time
doing array index operations. In fact, I can only think of one part
of one application where that was true.

However that is not true of every
program, of course, and so a very integer maths-intensive
programmer would complain, with justice, that int was not the
fastest type. You've got to balance this against the inconvenience
of not being able to index an array with an int.

Do you really have any information at all to back up your claims?
Since I really do find that the majority of applications I have
dealt with over the years do more integer arithmetic because they
wanted to do integer arithmetic than because they were calculating
array offsets. Also, a number of processors have alternative ways of
doing indexing
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ^^^^^^
that do not involve normal arithmetic registers at all, so your
argument
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ^^^^^^
even if true would be irrelevant for them.
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
--
Flash Gordon
In assembler I once ended up using the addressing units on one
processor to do integer arithmetic (not addressing related) so I
could get a higher throughput of integer operations.

You're making an elementary mistake. If an integer is used to index
an array, all operations used to calculate that index are indexing
operations, not just the ones within the square brackets.

No, I'm not. You are making the elementary mistake of assuming you
know more about what I have worked on than I do. You are also making
I see you do not address this. Does that mean you admit your mistake?
>the mistake of assuming that you know more about the processors I have
worked on than you do. On the processors I was referring to I was
doing most of the calculations that had ANYTHING to do with indexing
in the addressing units (normally adding 1 each time, but sometimes
more convoluted things) and had enough spare power left in the
addressing units that when doing assembler coding I was doing
arithmetic in them that was NOT related to indexing in any way, shape,
or form. Unless in the following:
arr[i] = x*x+y;
You would consider the "x*x+y" to be to do with indexing, in which
case you are not using a sensible definition.
I see you do not address this, which would typically involve three
integer loads of which only one has anything to do with indexing and two
other integer operations.
>>You'll find that most genuinely numerical data is real and therefore
stored as floating points.

All the numerical data on a lot of aircraft relating to
roll, pitch and altitude is passed around as scaled integers, as I
know having worked on a number of them. All the video data in a lot of
systems is passed around and processed as integers, I know having
worked on them.
I see you fail to address a significant range of applications which
typically use integer operations for other than indexing purposes. One
in which Java is generally not used.
I don't say absolutely all - an exception
would be a program that does very intensive work with 24-bit rgba
values,

There is a heck of a lot of image processing software around in daily
use, probably rather more than you would expect since I doubt you
would immediately think of some of the stuff I've worked on.
I see you fail to address that here is another significant area where
Java is typically not used that uses vast amounts of integer operations
for other than indexing.
or programs which operate on data which is inherently real but
stored in fixed-point format for speed.

Such as is the case in a lot of aircraft systems, or at least all of
the several systems I have worked on.
Generally however integer are
for counting things, and those things tend to be items stored in the
computer.

How about money? You generally need a finite number of decimal places.
The same applies to most measures in billing systems, invoicing
systems, cost control systems etc. (I work on such systems these days,
so I know how a subset of them work).
Oh look, you fail to address that here is another big area of
applications, which the study you mention explicitly excludes, where
lots of integer operations are performed for purposes other than indexing.
>Now, do you have any evidence that what you are claiming is actually
true or is it just based on your limited experience?
You fail to address this, since it is doubtful that your original
statement was based on the study you quote.
>Note that I'm not claiming that most integer arithmetic is not to do
with indexing, I only have a bit over 20 years experience so there is
a vast amount I have not seen, I am only asking if you have any REAL
justification for believing what you claim, such as a study that
actually does cover a large section of the SW industry.
--
Flash Gordon
Why quote my signature when you are not commenting on it?
Here are some stats collected for Java
Those are statistics about which operations are used on a very different
language, not about what those instructions are used for.
x sx y sy r
mul 0.247 0.422 0.367 0.969 0.880
logic 1.816 2.006 3.372 3.982 0.873
iload 14.511 6.332 18.063 9.185 0.847
<snip>
Figure 4: Summary of results obtained. Here, variables x and y range
over the static and dynamic data respectively and r is the linear
correlation
https://www.cs.tcd.ie/John.Waldron/kwilu/mmactee01.ps.

A quote from the document, "A total of 19 programs have been analysed
for this study." Hardly seems conclusive to me since I have worked on
more than 19 programs.

It also specifically avoided benchmarks designed to simulate server-side
applications, and there are vast numbers of server side applications.
You will see that there are far more iloads (load from stack) and aload
(load object handle) instructions than anything else. Java programs
int squareint(int x)
{
return x*x;
}

Typically, the above will contain one load from the stack and exactly
ZERO indexing operations. Try to find some relevant statistics instead.
don't spend their time doing calculations but moving data from one place
to another.
I can only think of one program I have worked on that spends a
significant amount of the time moving data from one place to another,
and that was code in what was basically a communications hub which was
specifically there to move data from one place to another.

In any case, results for Java are not results for C. Java tends to be
used for OOP, C is frequently used for other than OOP, so there is no
reason to expect the results to be comparable.
There are slightly more floating point arithmetic
instructions than integers ones, but far more integer loads. ystor and
yload represent the array accesses. Assuming plain array access with no
calculation of indices except an increment, each one takes one iload
operation, I would guess. The conclusion is that at least about half of
the integer operations are ultimately going into array index operations,
and that most numerical data is being handled in floating point.
You have no evidence that the bulk of the integer loads from the stack
were loads which had anything to do with indexing, unless you are
claiming that any load of a local variable is indexing.
Stats for C will be similar.
Why? See above. Although I do not believe your analysis of that paper, a
paper which had nothing to do with the topic in question. The study you
quote did analysis on the Java byte code produced by a SMALL set of
benchmarks and was designed to test something completely different. It
therefore proves exactly nothing about your claim.
--
Flash Gordon
Mar 29 '07 #23

"Flash Gordon" <sp**@flash-gordon.me.ukwrote in message
Malcolm McLean wrote, On 28/03/07 23:10:
There are slightly more floating point arithmetic
instructions than integers ones, but far more integer loads. ystor and
yload represent the array accesses. Assuming plain array access with no
calculation of indices except an increment, each one takes one iload
operation, I would guess. The conclusion is that at least about half of
the integer operations are ultimately going into array index operations,
and that most numerical data is being handled in floating point.

You have no evidence that the bulk of the integer loads from the stack
were loads which had anything to do with indexing, unless you are
claiming that any load of a local variable is indexing.
>Stats for C will be similar.

Why? See above. Although I do not believe your analysis of that paper, a
paper which had nothing to do with the topic in question. The study you
quote did analysis on the Java byte code produced by a SMALL set of
benchmarks and was designed to test something completely different. It
therefore proves exactly nothing about your claim.
There are often things that you know to be true without having done studies
on them. Often you can avoid the expense of a full study by looking at other
relevant data. We know the number of array loads and stores, we know the
number of integer loads, and we know that there must be at least one integer
operation for every array access. We also know that floating point numbers
are not used in array indexing, and we have them to compare.
You can of course claim that C programs are completely different in their
characteristics to Java programs, or that the dataset is too small. If a lot
was at stake then of course we would merely use the initial data to guide our
hypotheses. But a lot isn't at stake. I am happy to spend a few minutes
rooting about on the net for usage statistics, but not to spend days
tagging C variables as indexing or non-indexing and tracing them back from
the machine code. As it is, the claim of "exactly no evidence" must be
withdrawn. But really you should know that computers spend most of their
time moving data about. Approximately one quarter of all mainframe cycles
are used for sorting, for example.
--
Free games and programming goodies.
http://www.personal.leeds.ac.uk/~bgy1mm

Mar 29 '07 #24
Malcolm McLean wrote, On 29/03/07 22:50:
>
"Flash Gordon" <sp**@flash-gordon.me.ukwrote in message
>Malcolm McLean wrote, On 28/03/07 23:10:
There are slightly more floating point arithmetic
instructions than integers ones, but far more integer loads. ystor
and yload represent the array accesses. Assuming plain array access
with no calculation of indices except an increment, each one takes
one iload operation, I would guess. The conclusion is that at least
about half of the integer operations are ultimately going into array
index operations, and that most numerical data is being handled in
floating point.

You have no evidence that the bulk of the integer loads from the stack
were loads which had anything to do with indexing, unless you are
claiming that any load of a local variable is indexing.
>>Stats for C will be similar.

Why? See above. Although I do not believe your analysis of that paper,
a paper which had nothing to do with the topic in question. The study
you quote did analysis on the Java byte code produced by a SMALL set
of benchmarks and was designed to test something completely different.
It therefore proves exactly nothing about your claim.
There are often things that you know to be true without having done
studies on them. Often you can avoid the expense of a full study by
looking at other relevant data. We know the number of array loads and
stores, we know the number of integer loads,
You do NOT know that every integer load has anything to do with arrays.
and we know that there must
be at least one integer operation for every array access. We also know
that floating point numbers are not used in array indexing, and we have
them to compare.
The number of floating point operations is completely irrelevant. Unless
you can provide some evidence that the number of integer operations not
to do with indexing is comparable to the number of floating point
operations. Since a lot of code explicitly avoids floating point
operations it is obviously a false assumption for a lot of code.
You can of course claim that C programs are completely different in
their characteristics to Java programs,
I stated a valid reason for such a claim, namely that Java is generally
used for OOP (it being a language designed for it) and C is often used
for things other than OOP (having not been designed for it, although it
can be done). Seems like an excellent reason for things to be different.
or that the dataset is too
small.
Can you honestly claim that 18 benchmarks for one language are going to
be representative of millions of programs for a drastically different
language?
If a lot was at stake then of course we would merely use the
initial data to guide our hypotheses.
It is your hypothesis, not mine; either accept that it is just your
limited experience (everyone's experience is limited compared to the
amount of code written) or tell us what your evidence was for making the
claim. After all, you must already have the evidence if it was not just
your gut feeling based on your experience.
But a lot isn't at stake. I am
happy to spend a few minutes rooting about on the net for usage
statistics,
Next time try to find some relevant statistics, which means either
working from the raw data or from studies related to what you are trying
to prove. Or state opinions as opinion rather than fact and accept that
some people may disagree with your opinion.
but not to spend days tagging C variables as indexing or
non-indexing and tracing them back from the machine code.
Doing that on the limited amount of code you have would be unlikely to
convince me since I don't think you have any code for avionics systems,
marine systems, radio systems or financial systems, just to name a few
areas I've had some involvement in (I've not done much on marine or
radio) or the even larger number of types of SW around that I've had
nothing to do with.
As it is the
claim of "exactly no evidence" must be withdrawn.
Why? I stated clearly why it does not provide evidence for it and you
have not addressed my reasons for saying it does not.

Just to be clear, I've said the study you cited presented no evidence
for your claim, not that there is no evidence. For all I know there might
be vast amounts of evidence that would support your claim if you
presented it, but you have not. When I ask if someone has evidence to
support what they claim as fact I expect them to provide it not me.
But really you should
know that computers spend most of their time moving data about.
Most of my code is designed to only move data when it needs to, so most
of it spends only a small fraction of the time moving data. On the *one*
instance in over 20 years where this was provably false it was also
blindingly obvious that the wrong processor had been chosen, because the
only reason it had to do a lot of data moving was that it had a very
small amount of fast memory when it was trying to process very large
amounts of data so it had to keep moving data on to the fast memory. Oh,
there was one other example, but it was too slow so I changed it so that
it was moving vastly less data (and spending less time moving data than
on other tasks) and told the original designer why it should have been
done differently to reduce it even further.
Approximately one quarter of all mainframe cycles are used for sorting,
for example.
Again, where is your evidence of this? Also, where is your evidence that
most clock cycles are spent by a small number of mainframes rather than
orders of magnitude more PCs or the even larger number of embedded
processors? Also, since the number of cycles to fetch an integer (even
without indexing) is often far larger than the number of cycles for an
add (or even multiply on some processors) that would still not prove
your point that most integer operations are for indexing.

Taking any one example, or class of examples, cannot prove anything about
the overall averages unless you also prove that the class is large enough
to overwhelm the sum of all the other classes.

Don't forget also that your initial challenge was because I said a
number of processors can do the indexing in addressing units so the size
of an int does not have to be determined by indexing calculations. Some
of these processors can even do the hopping around required to
efficiently implement an FFT using just the indexing hardware without
use of the arithmetic unit, so even after proving the point that most
integer arithmetic is for indexing (which I do not believe) you would
still have further work to do.
--
Flash Gordon
Mar 29 '07 #25
