Bytes | Software Development & Data Engineering Community

sizeof...

Greetings!!
Currently I am learning C from K&R. I have DOS and Linux on my
system. When I used the sizeof operator to compute the size of an
integer, it showed different results under DOS and Linux: under DOS it
shows that an integer occupies 2 bytes, but under Linux it shows 4
bytes. Does the sizeof operator "really" show the machine data or
register size ?
Please help me!!

Nov 14 '05 #1
37 replies · 1751 views
th************@hotmail.com wrote:
Greetings!!
Currently i am learning C from K&R.I have DOS & Linux in my
system.When i used sizeof() keyword to compute the size of integer , it
shows different results in DOS and Linux.In DOS it shows integer
occupies 2 bytes , but in Linux it shows 4 bytes.Is sizeof operator
"really" shows the machine data or register size ?
Please help
me!!


It depends on the compiler: 32-bit compilers (GCC etc.) give 4 bytes per
int, while old 16-bit compilers (Pacific C, Turbo C) give 2 bytes per
int.

--
---------------------------------------------
www.cepa.one.pl
FreeBSD r0xi qrde ;-]
Nov 14 '05 #2
On 29 May 2005 07:02:12 -0700, "th************@hotmail.com"
<th************@hotmail.com> wrote:
Greetings!!
Currently i am learning C from K&R.I have DOS & Linux in my
system.When i used sizeof() keyword to compute the size of integer , it
shows different results in DOS and Linux.In DOS it shows integer
occupies 2 bytes , but in Linux it shows 4 bytes.Is sizeof operator
"really" shows the machine data or register size ?
Please help
me!!


DOS was made for the old 8086/88 processors. These were 16-bit
processors (thus int being 2 bytes). On newer machines, the CPU runs
in a compatibility mode when running DOS (either real mode or V86
mode). That doesn't mean the processor is suddenly a 16-bit
processor; it only acts like one.

So, in this case sizeof doesn't show the register size (which is 32
bits), only the default operand size, which is 16 bits in this
compatibility mode.

Nov 14 '05 #3
th************@hotmail.com wrote on 29/05/05 :
Currently i am learning C from K&R.I have DOS & Linux in my
system.When i used sizeof() keyword to compute the size of integer , it
shows different results in DOS and Linux.In DOS it shows integer
occupies 2 bytes , but in Linux it shows 4 bytes.Is sizeof operator
"really" shows the machine data or register size ?


sizeof yields the number of bytes occupied by a constant or an object.

Yes, an int can use 2 chars in x86 real mode (PC/MS-DOS) and 4 chars in
x86 protected/extended mode (PC/Win32, Linux). It could also use 1 char
on a TMS320C54 DSP (Texas Instruments), where a char and an int are
both 16 bits wide.

--
Emmanuel
The C-FAQ: http://www.eskimo.com/~scs/C-faq/faq.html
The C-library: http://www.dinkumware.com/refxc.html

"Mal nommer les choses c'est ajouter du malheur au monde." ("To
misname things is to add to the misery of the world.") -- Albert Camus.

Nov 14 '05 #4
Paul Mesken <us*****@euronet.nl> writes:
On 29 May 2005 07:02:12 -0700, "th************@hotmail.com"
<th************@hotmail.com> wrote:
Greetings!!
Currently i am learning C from K&R.I have DOS & Linux in my
system.When i used sizeof() keyword to compute the size of integer , it
shows different results in DOS and Linux.In DOS it shows integer
occupies 2 bytes , but in Linux it shows 4 bytes.Is sizeof operator
"really" shows the machine data or register size ?
Please help
me!!


DOS was made for the old 8086/88 processors. These were 16 bit
processors (thus : int being 2 bytes). On newer machines, the CPU runs
in a compatibility mode when running DOS (either real mode or V86
mode). It doesn't mean that the processor is suddenly a 16 bit
processor, it only acts like one.

So, in this case sizeof doesn't show the register size (which is 32
bits). Only the default operand size, which is 16 bits in this
compatibility mode.


sizeof(int) simply yields the size, in bytes, of an int. It's up to
the compiler implementer to decide how big an int is going to be.
It's often whatever size will fit in a machine register, but that's
not required.

On the hardware level, an x86 machine (which I assume is what the OP
is using) can operate on 8-bit, 16-bit, or 32-bit quantities. The C
language specifies a range of integer types: char (at least 8 bits),
short (at least 16 bits), int (at least 16 bits), and long (at least
32 bits). (On some older machines, 32-bit operations might be more
difficult). As you can see, a compiler implementer has some
flexibility in choosing how to assign sizes to the various C types.
Typical choices are:
char 8 bits
short 16 bits
int 16 bits
long 32 bits
and
char 8 bits
short 16 bits
int 32 bits
long 32 bits

It happens that the former choice (16-bit int) is more convenient on
the older systems that DOS was designed for, and the latter (32-bit
int) is more convenient for the newer systems on which Linux typically
runs.

(I'm ignoring the type "long long", introduced in C99, which is at
least 64 bits.)

All this is only indirectly related to the "default operand size"; I'm
not even sure what that means.

--
Keith Thompson (The_Other_Keith) ks***@mib.org <http://www.ghoti.net/~kst>
San Diego Supercomputer Center <*> <http://users.sdsc.edu/~kst>
We must do something. This is something. Therefore, we must do this.
Nov 14 '05 #5

<th************@hotmail.com> wrote
Greetings!!
Currently i am learning C from K&R.I have DOS & Linux in my
system.When i used sizeof() keyword to compute the size of integer , it
shows different results in DOS and Linux.In DOS it shows integer
occupies 2 bytes , but in Linux it shows 4 bytes.Is sizeof operator
"really" shows the machine data or register size ?
Please help
me!!

All C objects are a multiple of sizeof(char), which is defined to be one.
Usually char is 8 bits, but not always.
int is designed to be the "natural" integer size of the machine. On some
machines it is not obvious what the natural integer size should be, and the
intermediate x86 chips are a case in point, because of the way the
instruction set allows registers to be treated as pairs.

In any case there is no absolute guarantee that an int will fit in a
register. int must be at least 16 bits, and C compilers are available
for processors with 8-bit registers.
Nov 14 '05 #6
On Sun, 29 May 2005 21:12:16 GMT, Keith Thompson <ks***@mib.org>
wrote:
Paul Mesken <us*****@euronet.nl> writes:
On 29 May 2005 07:02:12 -0700, "th************@hotmail.com"
<th************@hotmail.com> wrote:
Greetings!!
Currently i am learning C from K&R.I have DOS & Linux in my
system.When i used sizeof() keyword to compute the size of integer , it
shows different results in DOS and Linux.In DOS it shows integer
occupies 2 bytes , but in Linux it shows 4 bytes.Is sizeof operator
"really" shows the machine data or register size ?
Please help
me!!
DOS was made for the old 8086/88 processors. These were 16 bit
processors (thus : int being 2 bytes). On newer machines, the CPU runs
in a compatibility mode when running DOS (either real mode or V86
mode). It doesn't mean that the processor is suddenly a 16 bit
processor, it only acts like one.

So, in this case sizeof doesn't show the register size (which is 32
bits). Only the default operand size, which is 16 bits in this
compatibility mode.


sizeof(int) simply yields the size, in bytes, of an int. It's up to
the compiler implementer to decide how big an int is going to be.
It's often whatever size will fit in a machine register, but that's
not required.

On the hardware level, an x86 machine (which I assume is what the OP
is using) can operate on 8-bit, 16-bit, or 32-bit quantities.


Yes, 16 bit and 32 bit operations can be mixed but one of them will
most certainly (there are some exceptions) incur a performance penalty
since it requires a size prefix. Which one (16 or 32) depends on what
the "default bit" of the code segment is. With an assembler, the
programmer sets the default operand size of the code segment by
instructions like USE16 or USE32.
The C
language specifies a range of integer types: char (at least 8 bits),
short (at least 16 bits), int (at least 16 bits), and long (at least
32 bits). (On some older machines, 32-bit operations might be more
difficult). As you can see, a compiler implementer has some
flexibility in choosing how to assign sizes to the various C types.
Typical choices are:
char 8 bits
short 16 bits
int 16 bits
long 32 bits
and
char 8 bits
short 16 bits
int 32 bits
long 32 bits

It happens that the former choice (16-bit int) is more convenient on
the older systems that DOS was designed for, and the latter (32-bit
int) is more convenient for the newer systems on which Linux typically
runs.

(I'm ignoring the type "long long", introduced in C99, which is at
least 64 bits.)

All this is only indirectly related to the "default operand size"; I'm
not even sure what that means.


Consider this :

MOV EAX, 0x01020304
MOV AX, 0x0102

These two instructions load an immediate value into a register.

The first one uses all 32 bits of the EAX register (the general
purpose registers of the x86 became 32 bits starting with the 386).

The second one uses all 16 bits of the AX register (which is really
the lower 16 bits of the EAX register).

BUT both have the same opcode : 0xb8.

This seems strange since the operations are really different : one is
16 bits, the other is 32 bits. The CPU _cannot_ distinguish between
the two different operations because they have the same opcode.

However : the CPU uses the "default bit" (or "D bit") of the code
segment to establish whether a 16 bit operand size is the default and,
thus, the lower 16 bits of EAX (aka AX) are used OR a 32 bit operand
is the default and, thus, all of the 32 bits of EAX are used.

One can, however, change this behaviour by using a size prefix (0x66)
so that the behaviour according to the D bit of the code segment is
inverted. Of course, the assembler might do this for the programmer
automatically based on whether "EAX" or "AX" is used in this example.

However, using such a prefix (and there are more such prefixes for
mixing 16 and 32 bit code) comes with a performance penalty.

Since DOS works in real or V86 mode in which 16 bit is the default
size (after all : it mimics the 8086/88), it makes perfect sense to
have int being 16 bits (for the 8086/88, there wasn't even an option
since the registers were only 16 bits wide, using 32 bits would be
dreadfully slow). Even though the Standard doesn't require it, int is
supposed to be the "quick" datatype.
Nov 14 '05 #7
On Mon, 30 May 2005 00:33:01 +0200, Paul Mesken <us*****@euronet.nl>
wrote in comp.lang.c:
On Sun, 29 May 2005 21:12:16 GMT, Keith Thompson <ks***@mib.org>
wrote:
Paul Mesken <us*****@euronet.nl> writes:
On 29 May 2005 07:02:12 -0700, "th************@hotmail.com"
<th************@hotmail.com> wrote:

Greetings!!
Currently i am learning C from K&R.I have DOS & Linux in my
system.When i used sizeof() keyword to compute the size of integer , it
shows different results in DOS and Linux.In DOS it shows integer
occupies 2 bytes , but in Linux it shows 4 bytes.Is sizeof operator
"really" shows the machine data or register size ?
Please help
me!!

DOS was made for the old 8086/88 processors. These were 16 bit
processors (thus : int being 2 bytes). On newer machines, the CPU runs
in a compatibility mode when running DOS (either real mode or V86
mode). It doesn't mean that the processor is suddenly a 16 bit
processor, it only acts like one.

So, in this case sizeof doesn't show the register size (which is 32
bits). Only the default operand size, which is 16 bits in this
compatibility mode.


sizeof(int) simply yields the size, in bytes, of an int. It's up to
the compiler implementer to decide how big an int is going to be.
It's often whatever size will fit in a machine register, but that's
not required.

On the hardware level, an x86 machine (which I assume is what the OP
is using) can operate on 8-bit, 16-bit, or 32-bit quantities.


Yes, 16 bit and 32 bit operations can be mixed but one of them will
most certainly (there are some exceptions) incur a performance penalty
since it requires a size prefix. Which one (16 or 32) depends on what
the "default bit" of the code segment is. With an assembler, the
programmer sets the default operand size of the code segment by
instructions like USE16 or USE32.


What does assembly language or modes of the Intel processor have to do
with C? Caring about whether or not there is a performance penalty
for one thing or another thing is off-topic here, is a micro
optimization, and has nothing to do with the C language, which does
not specify anything at all about the speed or efficiency of anything.
The C
language specifies a range of integer types: char (at least 8 bits),
short (at least 16 bits), int (at least 16 bits), and long (at least
32 bits). (On some older machines, 32-bit operations might be more
difficult). As you can see, a compiler implementer has some
flexibility in choosing how to assign sizes to the various C types.
Typical choices are:
char 8 bits
short 16 bits
int 16 bits
long 32 bits
and
char 8 bits
short 16 bits
int 32 bits
long 32 bits

It happens that the former choice (16-bit int) is more convenient on
the older systems that DOS was designed for, and the latter (32-bit
int) is more convenient for the newer systems on which Linux typically
runs.

(I'm ignoring the type "long long", introduced in C99, which is at
least 64 bits.)

All this is only indirectly related to the "default operand size"; I'm
not even sure what that means.


Consider this :

MOV EAX, 0x01020304
MOV AX, 0x0102

[snip much extremely off-topic text]

If you want to expound about the bizarre quirks of the Frankenstein
patchwork Intel X86 architecture, I'd suggest you do so in
news:comp.lang.asm.x86, where it is appreciated and topical.

It is neither of those things here.

--
Jack Klein
Home: http://JK-Technology.Com
FAQs for
comp.lang.c http://www.eskimo.com/~scs/C-faq/top.html
comp.lang.c++ http://www.parashift.com/c++-faq-lite/
alt.comp.lang.learn.c-c++
http://www.contrib.andrew.cmu.edu/~a...FAQ-acllc.html
Nov 14 '05 #8
On Sun, 29 May 2005 18:31:15 -0500, Jack Klein <ja*******@spamcop.net>
wrote:
Caring about whether or not there is a performance penalty
for one thing or another thing is off-topic here, is a micro
optimization, and has nothing to do with the C language, which does
not specify anything at all about the speed or efficiency of anything.
Nownownow Jack, that's not the right attitude for a C programmer ;-)

Don't you think our non-Assembly programming C brothers/sisters have a
right to know that a division is typically slower than an addition,
for example? Or is such a statement off topic here because the
Standard doesn't mention this? (even though it is true in the real
world).

If speed and efficiency are of no concern then there are better
languages than C. C is not the only portable language (how about
Java?).

Isn't it so that C is a "Language of Choice" because it's "close to
the metal"? Computer architecture _is_ that "metal". A lot of
decisions made for the implementation of the compiler make more sense
(like the difference in sizeof(int) the OP experienced) when the
underlying computer architecture (which the compiler targets) is
explained.

Of course, we could deny the fact that programs written in C are
actually meant to run on a computer. But that would reduce discussion
to a purely academic one, interesting only to mathematicians and
linguists and devoid of practical experience. We would just be quoting
the Standard all the time.
If you want to expound about the bizarre quirks of the Frankenstein
patchwork Intel X86 architecture, I'd suggest you do so in
news:comp.lang.asm.x86, where it is appreciated and topical.


Well, the question wasn't asked there and even though it's obvious
that you want me to save that group, I'm a bit rusty (from too much
C/C++/SQL programming) and afraid that Terje would come up with
solutions quicker than my own ;-)

Nov 14 '05 #9
Paul Mesken wrote:
On Sun, 29 May 2005 18:31:15 -0500, Jack Klein <ja*******@spamcop.net>
wrote:

Caring about whether or not there is a performance penalty
for one thing or another thing is off-topic here, is a micro
optimization, and has nothing to do with the C language, which does
not specify anything at all about the speed or efficiency of anything.

Nownownow Jack, that's not the right attitude for a C programmer ;-)


Actually it is.
Don't you think our non-Assembly programming C brothers/sisters have a
right to know that a division is typically slower than an addition,
for example? Or is such a statement off topic here because the
Standard doesn't mention this? (even though it is true in the real
world).
All of which would be relevant if `implementation' were part of the name
of this newsgroup -- or if the standard set out any particular
requirements about how constructs behave.
If speed and efficiency are of no concern then there are better
languages than C. C is not the only portable language (how about
Java?).
All of which would be relevant if `advocacy' were part of the title of
this newsgroup.
Isn't it so that C is a "Language of Choice" because it's "close to
the metal"? Computer architecture _is_ that "metal". A lot of
decisions made for the implementation of the compiler make more sense
(like the difference in sizeof(int) the OP experienced) when the
underlying computer architecture (which the compiler targets) is
explained.
No, it's just an area where implementations are free to make certain
choices, according to the standard.
Of course, we could deny the fact that programs written in C are
actually meant to run on a computer. But this would reduce discussion
to a completely academical one, interesting only to mathematicians and
linguists and devoid of practical experience. We would just be quoting
the Standard all the time.
Programs written in C are designed to run on the virtual machine that
the language definition provides. The underlying platform could just as
easily be a computer or a flock of cooperative pigeons.

Topicality is important, serving two purposes: maintaining the
signal/noise ratio at an appropriate level (lest those who *can* answer
questions bail) and limiting the subject matter so that answers can be
properly vetted by the community.

Cheers,
--ag

--
Artie Gold -- Austin, Texas
http://it-matters.blogspot.com (new post 12/5)
http://www.cafepress.com/goldsays
Nov 14 '05 #10
Paul Mesken wrote:
Jack Klein <ja*******@spamcop.net> wrote:

Caring about whether or not there is a performance penalty
for one thing or another thing is off-topic here, is a micro
optimization, and has nothing to do with the C language, which
does not specify anything at all about the speed or efficiency
of anything.
Nownownow Jack, that's not the right attitude for a C programmer ;-)


But it's the right attitude for clc. You can introduce specific
architectures if you want to cite examples of how different
implementations will yield different results. But readers should
not be encouraged to think of C in terms of writing code for one (or
two) specific architectures.
Don't you think our non-Assembly programming C brothers/sisters
have a right to know that a division is typically slower than an
addition, for example?
Yes, but I don't want to see discussions of say opcodes and actual
cycle/latency times on an x86.
Or is such a statement off topic here because the Standard doesn't
mention this? (even though it is true in the real world).
The critical point is that what the standard(s) do say in terms of
language semantics are the priority.
...
A lot of decisions made for the implementation of the compiler make
more sense (like the difference in sizeof(int) the OP experienced)
when the underlying computer architecture (which the compiler
targets) is explained.
CLC is not here to discuss implementations and their internals.
There are plenty of other groups that discuss that. CLC is here to
discuss C programming, i.e. writing C code that is portable to all
implementations.

It is a natural human (or at least programmer) tendency to want to
dissect C code in terms of disassemblies, working out what the
compiler actually does with source code. However, it has been my
long experience that this only coerces students of C (and other
languages) into writing architecture-specific code. The consequences
of this attitude can and often do come back to haunt students.

Unlearning acquired bad habits is difficult and time consuming.
Of course, we could deny the fact that programs written in C are
actually meant to run on a computer.
Who is denying this? C's main goal was to implement Unix and
associated tools. You don't get much more practical than that.
But this would reduce discussion to a completely academical one,
interesting only to mathematicians and linguists and devoid of
practical experience.
CLC is an esoteric group in that it does focus exclusively on the
application of the language definition, without trying to motivate
readers towards specific architecture specific trends.

How does it harm students to understand the language abstraction?
We would just be quoting the Standard all the time.


And what is wrong with that? Too few C programmers sit down to
actually analyse the language they are employing. This just
promotes a 'near enough is good enough' approach. With a language
like C, this is a dangerous attitude.

Time and time again you will see people questioning why certain
code works on one machine but not on another. The reason is invariably
because they have been using (and been encouraged to use) Machine-X C,
rather than learn the portable aspects of the core C language itself.

If you want language efficiencies on specific architectures, then
there are no shortage of groups that can help. On the other hand,
discussing strictly conforming code has pretty much only one group,
namely clc (and language/moderated) derivatives.

--
Peter

Nov 14 '05 #11
>All C objects are a multiple of sizeof(char), which is defined to be one.
Usually char is 8 bits, but not always.


Can you please tell me where/what the different sizes of char are? I am
new to C and I haven't seen any compiler where the size of char is
different from 8. Please specify.

Nov 14 '05 #12
"Prashant Mahajan" <mr***************@gmail.com> writes:
All C objects are a multiple of sizeof(char), which is defined to be one.
Usually char is 8 bits, but not always.


Can you please tell where/what are the diffrent sizes of char?I am new
to C and I haven't seen any compiler where size of char is diffrent
from 8 .Please specify.


I've never heard of a hosted implementation of C with CHAR_BIT != 8.
The need to exchange octet-oriented data with other systems strongly
encourages implementers to use 8-bit chars even on systems that can't
directly access 8-bit quantities. (Search for references to "Cray
vector" in this newsgroup for one example.)

Values of CHAR_BIT other than 8 are most common on DSPs (Digital
Signal Processors). These are special-purpose systems that process
data 16 or 32 bits at a time.

Some older systems have had 9-bit bytes, or word sizes such as 36 or
60 bits, but most or all of them probably predate the ANSI C standard;
powers of two have pretty much taken over in recent decades.

--
Keith Thompson (The_Other_Keith) ks***@mib.org <http://www.ghoti.net/~kst>
San Diego Supercomputer Center <*> <http://users.sdsc.edu/~kst>
We must do something. This is something. Therefore, we must do this.
Nov 14 '05 #13
Prashant Mahajan wrote:
All C objects are a multiple of sizeof(char), which is defined
to be *one*. Usually char is *8 bits*, but not always.

Can you please tell where/what are the diffrent sizes of char?I am
new to C and I haven't seen any compiler where size of char is
diffrent from 8 .Please specify.


He did. There is no system where sizeof(char) is not one.

--
Some informative links:
news:news.announce.newusers
http://www.geocities.com/nnqweb/
http://www.catb.org/~esr/faqs/smart-questions.html
http://www.caliburn.nl/topposting.html
http://www.netmeister.org/news/learn2quote.html
Nov 14 '05 #14
CBFalconer <cb********@yahoo.com> writes:
Prashant Mahajan wrote:
All C objects are a multiple of sizeof(char), which is defined
to be *one*. Usually char is *8 bits*, but not always.

Can you please tell where/what are the diffrent sizes of char?I am
new to C and I haven't seen any compiler where size of char is
diffrent from 8 .Please specify.


He did. There is no system where sizeof(char) is not one.


As long as we're being pedantic, why do you assume that the phrase
"size of" refers to the C "sizeof" operator?

--
Keith Thompson (The_Other_Keith) ks***@mib.org <http://www.ghoti.net/~kst>
San Diego Supercomputer Center <*> <http://users.sdsc.edu/~kst>
We must do something. This is something. Therefore, we must do this.
Nov 14 '05 #15
On Mon, 30 May 2005 02:04:29 +0200, in comp.lang.c , Paul Mesken
<us*****@euronet.nl> wrote:
On Sun, 29 May 2005 18:31:15 -0500, Jack Klein <ja*******@spamcop.net>
wrote:
Caring about whether or not there is a performance penalty
for one thing or another thing is off-topic here, is a micro
optimization, and has nothing to do with the C language, which does
not specify anything at all about the speed or efficiency of anything.
Nownownow Jack, that's not the right attitude for a C programmer ;-)


Smiley noted, but unless you're planning becoming a troll, you need to
understand the topic of this group a tad better.
Don't you think our non-Assembly programming C brothers/sisters
Whatever. This is not comp.lang.x86.assembler or whatever.
Or is such a statement off topic here because the
Standard doesn't mention this?
Yes.
(even though it is true in the real world).
And you have proof positive of this for all conceivable hardware?
Including specialist math hardware designed to do divisions
ultra-fast?
If speed and efficiency are of no concern then there are better
languages than C. C is not the only portable language (how about
Java?).


It's either an island or a coffee. The former is semitopical here, the
latter not.

(snip rest of foolish troll)

--
Mark McIntyre
CLC FAQ <http://www.eskimo.com/~scs/C-faq/top.html>
CLC readme: <http://www.ungerhu.com/jxh/clc.welcome.txt>

Nov 14 '05 #16
"Paul Mesken" <us*****@euronet.nl> wrote
Caring about whether or not there is a performance penalty
for one thing or another thing is off-topic here, is a micro
optimization, and has nothing to do with the C language, which does
not specify anything at all about the speed or efficiency of anything.
Nownownow Jack, that's not the right attitude for a C programmer ;-)

Don't you think our non-Assembly programming C brothers/sisters have a
right to know that a division is typically slower than an addition,
for example? Or is such a statement off topic here because the
Standard doesn't mention this? (even though it is true in the real
world).

I used to spend a lot of time cracking multiplications
( replacing x *= 9 with x = (x << 3) + x; )
Now I hardly ever do. The real world changes.
If speed and efficiency are of no concern then there are better
languages than C. C is not the only portable language (how about
Java?).
Efficiency is certainly a reason for using C. It is not the only one. When
the C++ standard template library came out I saw some figures that made a
very convincing case that the classes were more efficient than corresponding
C constructs. I did seriously consider doing everything in STL, but decided
against it, largely because the syntax made it too difficult to integrate
modules.
Isn't it so that C is a "Language of Choice" because it's "close to
the metal"? Computer architecture _is_ that "metal". A lot of
decisions made for the implementation of the compiler make more sense
(like the difference in sizeof(int) the OP experienced) when the
underlying computer architecture (which the compiler targets) is
explained.
You do need to understand how a computer works to be a good programmer,
irrespective of language. However, you shouldn't have to understand what
your particular architecture is. I have a UNIX box to run heavy-duty
programs on. I don't actually know some basic things such as how many
bits a pointer takes up, what the processor is, or what the size of long
is. I don't need to. I just give it C programs and it spits back the
results.
Of course, we could deny the fact that programs written in C are
actually meant to run on a computer. But this would reduce discussion
to a completely academical one, interesting only to mathematicians and
linguists and devoid of practical experience. We would just be quoting
the Standard all the time.
You need both theoretical and practical approaches. It is nice to know that
a ZX81 can emulate a Cray given a large enough supply of tapes, though no
one would ever try to do this. In reality the whole world runs on Microsoft
products, but it is nice to pretend sometimes that Mr Gates is an obscure
software vendor whose operating system we have only vaguely heard of. Why?
Not because every single regular on this ng hasn't written a program to run
under MS at some stage, but because you need to separate the language from
the implementation to improve the structure of your programs.

Nov 14 '05 #17

Thank you for all your replies!!!

Nov 14 '05 #18
On Mon, 30 May 2005 15:12:45 +0000 (UTC), "Malcolm"
<re*******@btinternet.com> wrote:
I used to spend a lot of time back cracking multiplications
( replacing x *= 9 with x = (x << 3) + x; )
Now I hardly ever do. The real world changes.


That's a good point. Multiplications aren't as slow as they once used
to be (although your example is still faster than the multiplication
on a P3 or P4, but that's beside the point).

In general, I think that a single operation should not be replaced by
two or more operations which just happen to be faster at the
_present_. It also tends to hurt readability.

On the other hand, replacing one operation with another one can be
quite helpful.

For example :

a = b % 16;

a = b & 15;

(for some unsigned int b)

Of course, the compiler should optimize such simple constructs but I
have too much experience with compilers that don't do obvious
optimizations to have any trust in them :-)

I think it's perfectly safe to assume that multiplications, divisions
and modulus operations will never be faster than the boolean
operations (and, or, not), additions and subtractions.

Having said that : I believe that readability (and, thus,
maintainability) is the most important factor for the bulk of the
code. If a program needs optimization for speed (and that optimization
is needed in the code, not in the hardware) then, typically, only a
very small part of the program needs that optimization (the "90% of
the time, 10% of the code is executed" thing :-)

Nov 14 '05 #19
On Mon, 30 May 2005 12:13:03 +0100, Mark McIntyre
<ma**********@spamcop.net> wrote:
And you have proof positive of this for all conceivable hardware?
Including specialist math hardware designed to do divisions
ultra-fast?


Let's do it somewhat simpler : _you_ come up with a _single_ piece of
hardware that does divisions quicker than additions.

I would be mightily impressed.

Nov 14 '05 #20
On Mon, 30 May 2005 06:07:36 GMT, Keith Thompson <ks***@mib.org> wrote
in comp.lang.c:
CBFalconer <cb********@yahoo.com> writes:
Prashant Mahajan wrote:

All C objects are a multiple of sizeof(char), which is defined
to be one. Usually char is 8 bits, but _not always_.

Can you please tell where/what are the different sizes of char? I am
new to C and I haven't seen any compiler where the size of char is
different from 8. Please specify.


He did. There is no system where sizeof(char) is not one.


As long as we're being pedantic, why do you assume that the phrase
"size of" refers to the C "sizeof" operator?


Perhaps from the C standard?

6.5.3.4 The sizeof operator (paragraph 2)

"The sizeof operator yields the size (in bytes) of its operand"

In C, there is no way to know the size of an object or type without
using the sizeof operator, with one exception.

That exception, of course, being the character types, for which sizeof
yields a value of 1, and which occupy 1 byte, by definition.

--
Jack Klein
Home: http://JK-Technology.Com
FAQs for
comp.lang.c http://www.eskimo.com/~scs/C-faq/top.html
comp.lang.c++ http://www.parashift.com/c++-faq-lite/
alt.comp.lang.learn.c-c++
http://www.contrib.andrew.cmu.edu/~a...FAQ-acllc.html
Nov 14 '05 #21
On Mon, 30 May 2005 18:02:43 +0200, in comp.lang.c , Paul Mesken
<us*****@euronet.nl> wrote:
On Mon, 30 May 2005 12:13:03 +0100, Mark McIntyre
<ma**********@spamcop.net> wrote:
And you have proof positive of this for all conceivable hardware?
Including specialist math hardware designed to do divisions
ultra-fast?
Let's do it somewhat simpler : _you_ come up with a _single_ piece of
hardware that does divisions quicker than additions.


I have no need to. I'm making no claims at all. You're the one
asserting its slower, and using this as the basis for some requirement
to understand assembler or hardware or something.
I would be mightily impressed.


And I care because?
--
Mark McIntyre
CLC FAQ <http://www.eskimo.com/~scs/C-faq/top.html>
CLC readme: <http://www.ungerhu.com/jxh/clc.welcome.txt>
Nov 14 '05 #22
On Tue, 31 May 2005 00:03:07 +0100, Mark McIntyre
<ma**********@spamcop.net> wrote:
On Mon, 30 May 2005 18:02:43 +0200, in comp.lang.c , Paul Mesken
<us*****@euronet.nl> wrote:
On Mon, 30 May 2005 12:13:03 +0100, Mark McIntyre
<ma**********@spamcop.net> wrote:
And you have proof positive of this for all conceivable hardware?
Including specialist math hardware designed to do divisions
ultra-fast?


Let's do it somewhat simpler : _you_ come up with a _single_ piece of
hardware that does divisions quicker than additions.


I have no need to. I'm making no claims at all. You're the one
asserting its slower, and using this as the basis for some requirement
to understand assembler or hardware or something.


It's completely logical that a division will always be slower than an
addition, given the algorithms that are used to implement it.

Here's an overview :

http://www.ecst.csuchico.edu/~julian...97division.pdf

And as for proof that divisions are slower than additions on all
conceivable hardware :

Of course, I don't have such proof since I don't know all conceivable
hardware (who knows? some quantum computer).

But that doesn't disprove my assertion. Everyone with a tiny bit of
knowledge about how divisions and additions are implemented in
hardware will realize that additions are quicker (implementing an
addition with boolean primitives is about the very first exercise one
does when learning hardware implementation).

It's simply logical.

Of course, you can choose to ignore this and program as if divisions
are just as quick as additions.

Personally, I'll never choose ignorance over knowledge. Ignorance is a
flaw, not a feature.
I would be mightily impressed.


And I care because?


You have to ask that to me?
Nov 14 '05 #23
On Mon, 30 May 2005 02:04:29 +0200, Paul Mesken <us*****@euronet.nl>
wrote in comp.lang.c:
On Sun, 29 May 2005 18:31:15 -0500, Jack Klein <ja*******@spamcop.net>
wrote:
Caring about whether or not there is a performance penalty
for one thing or another thing is off-topic here, is a micro
optimization, and has nothing to do with the C language, which does
not specify anything at all about the speed or efficiency of anything.

[snip]
If speed and efficiency are of no concern then there are better
languages than C. C is not the only portable language (how about
Java?).


Java? Portable? You have just disqualified yourself from any serious
discussion of C on various architectures. Such as the 98% of
processors and controllers manufactured in the world today that are
not used as the main CPU in desktop/laptop/server applications.

--
Jack Klein
Home: http://JK-Technology.Com
FAQs for
comp.lang.c http://www.eskimo.com/~scs/C-faq/top.html
comp.lang.c++ http://www.parashift.com/c++-faq-lite/
alt.comp.lang.learn.c-c++
http://www.contrib.andrew.cmu.edu/~a...FAQ-acllc.html
Nov 14 '05 #24
On Tue, 31 May 2005 01:11:56 -0500, Jack Klein <ja*******@spamcop.net>
wrote:
On Mon, 30 May 2005 02:04:29 +0200, Paul Mesken <us*****@euronet.nl>
wrote in comp.lang.c:

If speed and efficiency are of no concern then there are better
languages than C. C is not the only portable language (how about
Java?).


Java? Portable? You have just disqualified yourself from any serious
discussion of C on various architectures.


Well, lucky me then that such discussions aren't topical here ;-)

Nov 14 '05 #25

"Jack Klein" <ja*******@spamcop.net> wrote

In C, there is no way to know the size of an object or type without
using the sizeof operator, with one exception.

size_t sizeof(int)
{
int *mem = malloc(1024);
unsigned char *ptr = (unsigned char *) mem;
size_t answer = 0;
memset(mem, 0, 1024);
*mem = ~0;
while(ptr[answer] != 0)
answer++;

return answer;
}

(OK maybe we've got padding bytes. So write to mem[1] and check.)
Nov 14 '05 #26
In article <ir********************************@4ax.com>,
Jack Klein <ja*******@spamcop.net> wrote:

In C, there is no way to know the size of an object or type without
using the sizeof operator, with one exception.


look at the declaration for the object and figure it out (human error possible)

offsetof() the type's successor in a struct (padding may add to apparent size)

pointer differences in an array of the type (padding)

set last (subpart) to a known value and use memchr to search for it
(begs the question of the size of the last subpart, but that should
be smaller and simpler and not be turtles all the way down)
--
7842++
Nov 14 '05 #27
Malcolm wrote:

"Jack Klein" <ja*******@spamcop.net> wrote

In C, there is no way to know the size of an object or type without
using the sizeof operator, with one exception.

size_t sizeof(int)
{
int *mem = malloc(1024);
unsigned char *ptr = (unsigned char *) mem;
size_t answer = 0;
memset(mem, 0, 1024);
*mem = ~0;
while(ptr[answer] != 0)
answer++;

return answer;
}

(OK maybe we've got padding bytes. So write to mem[1] and check.)


sizeof(int) isn't limited to 1024.

~0 is equal to zero on one's complement machines.
Assignment is by value.

--
pete
Nov 14 '05 #28

"Paul Mesken" <us*****@euronet.nl> wrote

But that doesn't disprove my assertion. Everyone with a tiny bit of
knowledge about how divisions and additions are implemented in
hardware will realize that additions are quicker (implementing an
addition with boolean primitives is about the very first exercise one
does when learning hardware implementation).

But replacing a division with an addition is a micro-optimisation. Only
rarely will it make any real difference to a program's running time.
As for the single counterexample, consider this.

I've got a weighing factor I've got to apply to some variables. If I didn't
have to worry about running time at all I would implement it like this.

double weight = loadweightfactor();
....
x[i] /= weight;

However let's say that the array x consists of integers, and the weight is
usually 5/16, i.e. 0.3125 (so the division amounts to multiplying by 3.2).

So let's optimise our code

if (weight == 5.0/16)
{
/* in loop */

x[i] = ((x[i] << 2) + x[i]) >> 4;
y += strlen(str);
}

So we've got rid of our division and replaced it with an addition and two
shifts.

However this thing is running on a Pentium. It is not entirely impossible
that the floating point pipeline is idle, whilst the main pipeline is
overloaded. So what we have done is replaced a free instruction with several
instructions that actually cost microseconds.

So it isn't always the case that replacing a division with an addition will
speed things up.
Nov 14 '05 #29
On Tue, 31 May 2005 01:42:19 +0200, in comp.lang.c , Paul Mesken
<us*****@euronet.nl> wrote:
It's completely logical that a division will always be slower than an
addition, given the algorithms that are used to implement it.
And I say again, you can prove this to be completely true, for all
hardware, including specially optimised hardware? Imagine a system
that does addition on an 8080 and division on a Pentium. Which
operation is faster?

This may sound daft. But it's perfectly possible to build hardware with
an FP unit which runs at higher clock speeds than the main CPU (in fact
it probably makes sense; FP operations often take more clock cycles).
So I can easily envisage hardware on which addition and division were
of comparable speeds, or better.

Or consider for instance floating point addition, and integer division
by two. Which is faster?
Of course, I don't have such proof since I don't know all conceivable
hardware
Indeed. But no need to invoke magic. Today's hardware could do it.
But that doesn't disprove my assertion.
The thing about assertions is, to make them into facts, you have to
prove them to be true.
Everyone with a tiny bit of knowledge
I'm sure we all remember what they say about a little knowledge.
Of course, you can choose to ignore this and program as if divisions
are just as quick as additions.
Or you can program as if that's something you should not be worrying
about yet. Do you recall the three laws of optimisation?
Personally, I'll never choose ignorance over knowledge. Ignorance is a
flaw, not a feature.


And arrogance about one's knowledge is a bigger one, IMHO. YMMV.
--
Mark McIntyre
CLC FAQ <http://www.eskimo.com/~scs/C-faq/top.html>
CLC readme: <http://www.ungerhu.com/jxh/clc.welcome.txt>
Nov 14 '05 #30
On Tue, 31 May 2005 22:06:52 +0000 (UTC), "Malcolm"
<re*******@btinternet.com> wrote:

"Paul Mesken" <us*****@euronet.nl> wrote

But that doesn't disprove my assertion. Everyone with a tiny bit of
knowledge about how divisions and additions are implemented in
hardware will realize that additions are quicker (implementing an
addition with boolean primitives is about the very first exercise one
does when learning hardware implementation).
But replacing a division with an addition is a micro-optimisation. Only
rarely will it make any real difference to a program's running time.


This is true. The first optimization should, of course, be the
algorithm itself. However, there might be cases in which micro
optimizations make a big difference. In such a case it is handy to
know that not all arithmetical operations are created equal,
performance-wise.

As things are now (in general) :

+ - & | ~ ^ >> << are the quick ones
* is mediocre
/ % are the slow ones

Also, I didn't say that divisions should be replaced by additions
(although, if it can be done in an obvious way, like dividing by a
half, then by all means one should). I said that divisions are slower
than additions.
As for the single counterexample, consider this.

I've got a weighing factor I've got to apply to some variables. If I didn't
have to worry about running time at all I would implement it like this.

double weight = loadweightfactor();
...
x[i] /= weight;

However let's say that the array x consists of integers, and the weight is
usually 5/16. (ie 3.2)

So let's optimise our code

if (weight == 5.0/16)
{
/* in loop */

x[i] = ((x[i] << 2) + x[i]) >> 4;
y += strlen(str);
}
Note that I, in another post in this thread, opposed to this kind of
optimization :

"In general, I think that a single operation should not be replaced by
two or more operations which just happen to be faster at the
_present_. It also tends to hurt readability.".

Also, using an "if" as an optimization is hardly an optimization in
pipelined architectures.

For example :

Today's x86s suffer heavy penalties for mispredicted branches,
and static branch prediction will favor a fall-through in your
optimization (and in most cases it actually will _not_ fall through so
static branch prediction will mispredict most of the time).

Even if the branch is predicted correctly most of the time (in some
loop with dynamic branch prediction, for example) there's still some
overhead involved in using ifs. And then there is the condition
itself as well, of course, which is an extra operation.

But this single x[i] /= weight; operation can be replaced by another
single operation.

I would use the simple (and still readable) optimization of using a
multiplication instead of a division. So the function
loadweightfactor() should not return something like 5/16 but 16/5 and
x[i] will not be divided by weight but multiplied by it.

So :

double weight = loadweightfactor();
x[i] *= weight;

Of course, assuming that this course of action will not cause
performance penalties in the function loadweightfactor().
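A sketch of that replacement with the reciprocal hoisted out of the loop (function and variable names are mine, not from the thread):

```c
#include <stddef.h>

/* Replace one division per element with one multiplication per
   element: compute the reciprocal once, outside the loop. */
static void apply_weight(double *x, size_t n, double weight)
{
    double inv = 1.0 / weight;   /* the only division */
    size_t i;

    for (i = 0; i < n; i++)
        x[i] *= inv;             /* multiply, don't divide */
}
```

Dividing by `weight` and multiplying by `inv` can differ in the last bit for general floating-point values, which is worth remembering when exact results matter.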

As a side note :

x[i] = ((x[i] << 2) + x[i]) >> 4;

is not equivalent to :

x[i] /= 5/16;

But the intention was clear :-)
So we've got rid of our division and replaced it with an addition and two
shifts.


And an if statement, that's the expensive one :-)

As I said : one typically shouldn't replace a single operation by 2 or
more (in your example, it was replaced by 4, probably even more
operations in the generated machine code). It will hurt readability and
it might not be much of an optimization in the future.

Nov 14 '05 #31
On Tue, 31 May 2005 23:47:40 +0100, Mark McIntyre
<ma**********@spamcop.net> wrote:
On Tue, 31 May 2005 01:42:19 +0200, in comp.lang.c , Paul Mesken
<us*****@euronet.nl> wrote:
It's completely logical that a division will always be slower than an
addition, given the algorithms that are used to implement it.
And I say again, you can prove this to be completely true, for all
hardware, including specially optimised hardware? Imagine a system
that does addition on an 8080 and division on a Pentium. Which
operation is faster?


Well, if the 8080 is wildly overclocked (the Z80, which is code
compatible with the 8080, can go as fast as 32 MHz, last time I
checked) and the division is a division by zero on a Pentium then the
8080 might still be faster ;-)

Also, remember (I hope) that a Pentium works with a relatively long
pipeline and there will be a distinct difference between throughput
and latency.

My statement doesn't provide for such hypothetical systems. I work
with real systems and I'm confident enough (armed with the knowledge
about how divisions and additions are carried out in hardware) to
state that divisions will not be quicker than additions.

I will grant you that my statements are not universally valid (what
statement is?).
This may sound daft.
Agreed :-)
But its perfectly possible to build hardware with
a FP unit which runs at higher clock speeds than the main cpu (in fact
it probably makes sense, FP operations often take more clock cycles).
So I can easily envisage hardware on which addition and division were
of comparable speeds, or better.
I can envision such hardware as well. But I haven't actually _seen_
such hardware.

You're trying to undermine a solid programming practice by coming up
with counter examples that rely on hypothetical machines. I dare to
say that most programmers don't program for hypothetical machines but
real ones.
Or consider for instance floating point addition, and integer division
by two. Which is faster?
If the integer division is replaced by a single shift to the right
then it might be very close. But a shift is not a division.
Of course, I don't have such proof since I don't know all conceivable
hardware


Indeed. But no need to invoke magic. Today's hardware could do it.
But that doesn't disproof my assertion.


The thing about assertions is, to make them into facts, you have to
prove them to be true.


Take Newton's F = M * A

It's good enough for "daily use" even though QM and relativity theory
have shown it to be not precise enough for the very fast or the very
small.

But that doesn't make it less helpful, for "daily use".

On most real machines my statement about divisions and additions will
hold, but there might be some exceptions (although I believe most such
exceptions to be hypothetical).
Or you can program as if that's something you should not be worrying
about yet. Do you recall the three laws of optimisation?


There are many such sets but you're probably referring to the one which
starts with "law #1 Don't do it" :-)
Nov 14 '05 #32
Paul Mesken wrote:

On Tue, 31 May 2005 23:47:40 +0100, Mark McIntyre
<ma**********@spamcop.net> wrote:

Which operation is faster?


Well, if the 8080 is wildly overclocked


I think what Mark McIntyre meant, is that you're off topic.
Which code is faster,
isn't part of the definition of the language.

--
pete
Nov 14 '05 #33
"Malcolm" <re*******@btinternet.com> writes:
"Jack Klein" <ja*******@spamcop.net> wrote

In C, there is no way to know the size of an object or type without
using the sizeof operator, with one exception.

size_t sizeof(int)
{
int *mem = malloc(1024);
unsigned char *ptr = (unsigned char *) mem;
size_t answer = 0;
memset(mem, 0, 1024);
*mem = ~0;
while(ptr[answer] != 0)
answer++;

return answer;
}

(OK maybe we've got padding bytes. So write to mem[1] and check.)


<quibble>
If your C compiler allows you to name a function "sizeof", send it
back.
</quibble>

--
Keith Thompson (The_Other_Keith) ks***@mib.org <http://www.ghoti.net/~kst>
San Diego Supercomputer Center <*> <http://users.sdsc.edu/~kst>
We must do something. This is something. Therefore, we must do this.
Nov 14 '05 #34
On Wed, 01 Jun 2005 01:46:28 +0200, Paul Mesken <us*****@euronet.nl>
wrote:
Today's x86s suffer heavy penalties for mispredicted branches,
and static branch prediction will favor a fall-through in your
optimization (and in most cases it actually will _not_ fall through so
static branch prediction will mispredict most of the time).


I must have been half asleep. Of course, your code example _will_ fall
through most of the time. But then again : the code example still
needs an "else" and that will result in possibly two branches.

Nov 14 '05 #35
On Wed, 01 Jun 2005 02:11:58 +0200, in comp.lang.c , Paul Mesken
<us*****@euronet.nl> wrote:
On Tue, 31 May 2005 23:47:40 +0100, Mark McIntyre
<ma**********@spamcop.net> wrote:
But it's perfectly possible to build hardware with
an FP unit which runs at higher clock speeds than the main CPU
I can envision such hardware as well. But I haven't actually _seen_
such hardware.


The point is, you're making generalisations which aren't true, and you
don't even bother to caveat them. You're misleading people. I dislike
that.
You're trying to undermine a solid programming practice
but it's /not/ solid programming practice! Leave the micro-optimisations
to the compiler writer, the hardware and the OS, until you actually
know you need to optimise. Crivens, this is practically a Law. In
fact, it's three Laws.
Or consider for instance floating point addition, and integer division
by two. Which is faster?


If the integer division is replaced by a single shift to the right
then it might be very close. But a shift is not a division.


If you're allowed to move the goalposts by excluding some classes of
operation which inconveniently (for you) can be optimised in hardware
or software to act as counterexamples, then you can of course prove
your theorem. Where I come from, that's called "cheating"...
Take Newton's F = M * A
Why? Is it relevant to whether division is invariably slower than
addition?
On most real machines my statement about divisions and additions will
hold,


Again, you make an unfounded assertion which isn't even generally
true. I've given a real-world counterexample but you deny its
existence. That's OK, I'm sure others will have spotted by now that
you're in a hole.
Or you can program as if that's something you should not be worrying
about yet. Do you recall the three laws of optimisation?


There are many such sets but you're probably refering to the one which
starts with "law #1 Don't do it" :-)


That's the set. You remember the 2nd and 3rd laws?

--
Mark McIntyre
CLC FAQ <http://www.eskimo.com/~scs/C-faq/top.html>
CLC readme: <http://www.ungerhu.com/jxh/clc.welcome.txt>
Nov 14 '05 #36
On Wed, 01 Jun 2005 00:59:15 GMT, in comp.lang.c , pete
<pf*****@mindspring.com> wrote:
Paul Mesken wrote:

On Tue, 31 May 2005 23:47:40 +0100, Mark McIntyre
<ma**********@spamcop.net> wrote:
> Which operation is faster?


Well, if the 8080 is wildly overclocked


I think what Mark McIntyre meant, is that you're off topic.


That's one of the things I meant.
Which code is faster, isn't part of the definition of the language.


Absolutely. Asserting something about which code is faster is wildly
offtopic in CLC, even if correct. Which it isn't. :-)


--
Mark McIntyre
CLC FAQ <http://www.eskimo.com/~scs/C-faq/top.html>
CLC readme: <http://www.ungerhu.com/jxh/clc.welcome.txt>
Nov 14 '05 #37
On Wed, 01 Jun 2005 20:30:28 +0100, Mark McIntyre
<ma**********@spamcop.net> wrote:
On Wed, 01 Jun 2005 00:59:15 GMT, in comp.lang.c , pete
<pf*****@mindspring.com> wrote:

Which code is faster, isn't part of the definition of the language.


Absolutely. Asserting something about which code is faster is wildly
offtopic in CLC, even if correct. Which it isn't. :-)


Ooowww, you slipped up there Mark. Now _you_ are asserting things and
need to come up with universal proofs ;-)

Nov 14 '05 #38
