Bytes | Software Development & Data Engineering Community

sizeof...

Greetings!!
Currently I am learning C from K&R. I have DOS & Linux on my
system. When I used the sizeof operator to compute the size of an
integer, it showed different results in DOS and Linux: in DOS an
integer occupies 2 bytes, but in Linux 4 bytes. Does the sizeof
operator "really" show the machine data or register size?
Please help me!!

Nov 14 '05 #1
th************@hotmail.com wrote:
Greetings!!
Currently I am learning C from K&R. I have DOS & Linux on my
system. When I used the sizeof operator to compute the size of an
integer, it showed different results in DOS and Linux: in DOS an
integer occupies 2 bytes, but in Linux 4 bytes. Does the sizeof
operator "really" show the machine data or register size?
Please help me!!


It depends on the compiler: 32-bit compilers (GCC etc.) give you a
4-byte int, while old 16-bit compilers (Pacific C, Turbo C) give a
2-byte int.

--
---------------------------------------------
www.cepa.one.pl
FreeBSD r0xi qrde ;-]
Nov 14 '05 #2
On 29 May 2005 07:02:12 -0700, "th************@hotmail.com"
<th************@hotmail.com> wrote:
Greetings!!
Currently I am learning C from K&R. I have DOS & Linux on my
system. When I used the sizeof operator to compute the size of an
integer, it showed different results in DOS and Linux: in DOS an
integer occupies 2 bytes, but in Linux 4 bytes. Does the sizeof
operator "really" show the machine data or register size?
Please help me!!


DOS was made for the old 8086/88 processors. These were 16-bit
processors (hence int being 2 bytes). On newer machines, the CPU runs
in a compatibility mode when running DOS (either real mode or V86
mode). That doesn't mean the processor is suddenly a 16-bit
processor; it only acts like one.

So in this case sizeof doesn't show the register size (which is 32
bits), only the default operand size, which is 16 bits in this
compatibility mode.

Nov 14 '05 #3
th************@hotmail.com wrote on 29/05/05:
Currently I am learning C from K&R. I have DOS & Linux on my
system. When I used the sizeof operator to compute the size of an
integer, it showed different results in DOS and Linux: in DOS an
integer occupies 2 bytes, but in Linux 4 bytes. Does the sizeof
operator "really" show the machine data or register size?


sizeof returns the number of bytes occupied by a constant or an object.

Yes, an int can use 2 chars in x86 real mode (PC/MS-DOS) and 4 chars in
x86 protected/extended mode (PC/Win32, Linux). It could also use 1 char
on a TMS320C54 DSP (Texas Instruments), where a char and an int are
both 16 bits wide.

--
Emmanuel
The C-FAQ: http://www.eskimo.com/~scs/C-faq/faq.html
The C-library: http://www.dinkumware.com/refxc.html

"Mal nommer les choses c'est ajouter du malheur au
monde." -- Albert Camus.

Nov 14 '05 #4
Paul Mesken <us*****@euronet.nl> writes:
On 29 May 2005 07:02:12 -0700, "th************@hotmail.com"
<th************@hotmail.com> wrote:
Greetings!!
Currently I am learning C from K&R. I have DOS & Linux on my
system. When I used the sizeof operator to compute the size of an
integer, it showed different results in DOS and Linux: in DOS an
integer occupies 2 bytes, but in Linux 4 bytes. Does the sizeof
operator "really" show the machine data or register size?
Please help me!!


DOS was made for the old 8086/88 processors. These were 16-bit
processors (hence int being 2 bytes). On newer machines, the CPU runs
in a compatibility mode when running DOS (either real mode or V86
mode). That doesn't mean the processor is suddenly a 16-bit
processor; it only acts like one.

So in this case sizeof doesn't show the register size (which is 32
bits), only the default operand size, which is 16 bits in this
compatibility mode.


sizeof(int) simply yields the size, in bytes, of an int. It's up to
the compiler implementer to decide how big an int is going to be.
It's often whatever size will fit in a machine register, but that's
not required.

On the hardware level, an x86 machine (which I assume is what the OP
is using) can operate on 8-bit, 16-bit, or 32-bit quantities. The C
language specifies a range of integer types: char (at least 8 bits),
short (at least 16 bits), int (at least 16 bits), and long (at least
32 bits). (On some older machines, 32-bit operations might be more
difficult). As you can see, a compiler implementer has some
flexibility in choosing how to assign sizes to the various C types.
Typical choices are:
char 8 bits
short 16 bits
int 16 bits
long 32 bits
and
char 8 bits
short 16 bits
int 32 bits
long 32 bits

It happens that the former choice (16-bit int) is more convenient on
the older systems that DOS was designed for, and the latter (32-bit
int) is more convenient for the newer systems on which Linux typically
runs.

(I'm ignoring the type "long long", introduced in C99, which is at
least 64 bits.)

All this is only indirectly related to the "default operand size"; I'm
not even sure what that means.

--
Keith Thompson (The_Other_Keith) ks***@mib.org <http://www.ghoti.net/~kst>
San Diego Supercomputer Center <*> <http://users.sdsc.edu/~kst>
We must do something. This is something. Therefore, we must do this.
Nov 14 '05 #5

<th************@hotmail.com> wrote
Greetings!!
Currently I am learning C from K&R. I have DOS & Linux on my
system. When I used the sizeof operator to compute the size of an
integer, it showed different results in DOS and Linux: in DOS an
integer occupies 2 bytes, but in Linux 4 bytes. Does the sizeof
operator "really" show the machine data or register size?
Please help me!!

All C objects have sizes that are multiples of sizeof(char), which is
defined to be one. Usually char is 8 bits, but not always.
int is designed to be the "natural" integer size of the machine. On some
machines it is not obvious what the natural integer size should be, and the
intermediate x86 chips are a case in point, because of the way the
instruction set allows registers to be treated as pairs.

In any case there is no absolute guarantee that an int will fit in a
register. int must be at least 16 bits, and C compilers are available for
processors with 8-bit registers.
Nov 14 '05 #6
On Sun, 29 May 2005 21:12:16 GMT, Keith Thompson <ks***@mib.org>
wrote:
Paul Mesken <us*****@euronet.nl> writes:
On 29 May 2005 07:02:12 -0700, "th************@hotmail.com"
<th************@hotmail.com> wrote:
Greetings!!
Currently I am learning C from K&R. I have DOS & Linux on my
system. When I used the sizeof operator to compute the size of an
integer, it showed different results in DOS and Linux: in DOS an
integer occupies 2 bytes, but in Linux 4 bytes. Does the sizeof
operator "really" show the machine data or register size?
Please help me!!
DOS was made for the old 8086/88 processors. These were 16-bit
processors (hence int being 2 bytes). On newer machines, the CPU runs
in a compatibility mode when running DOS (either real mode or V86
mode). That doesn't mean the processor is suddenly a 16-bit
processor; it only acts like one.

So in this case sizeof doesn't show the register size (which is 32
bits), only the default operand size, which is 16 bits in this
compatibility mode.


sizeof(int) simply yields the size, in bytes, of an int. It's up to
the compiler implementer to decide how big an int is going to be.
It's often whatever size will fit in a machine register, but that's
not required.

On the hardware level, an x86 machine (which I assume is what the OP
is using) can operate on 8-bit, 16-bit, or 32-bit quantities.


Yes, 16 bit and 32 bit operations can be mixed but one of them will
most certainly (there are some exceptions) incur a performance penalty
since it requires a size prefix. Which one (16 or 32) depends on what
the "default bit" of the code segment is. With an assembler, the
programmer sets the default operand size of the code segment by
instructions like USE16 or USE32.
The C
language specifies a range of integer types: char (at least 8 bits),
short (at least 16 bits), int (at least 16 bits), and long (at least
32 bits). (On some older machines, 32-bit operations might be more
difficult). As you can see, a compiler implementer has some
flexibility in choosing how to assign sizes to the various C types.
Typical choices are:
char 8 bits
short 16 bits
int 16 bits
long 32 bits
and
char 8 bits
short 16 bits
int 32 bits
long 32 bits

It happens that the former choice (16-bit int) is more convenient on
the older systems that DOS was designed for, and the latter (32-bit
int) is more convenient for the newer systems on which Linux typically
runs.

(I'm ignoring the type "long long", introduced in C99, which is at
least 64 bits.)

All this is only indirectly related to the "default operand size"; I'm
not even sure what that means.


Consider this:

MOV EAX, 0x01020304
MOV AX, 0x0102

These two instructions load an immediate value into a register.

The first one uses all 32 bits of the EAX register (the general
purpose registers of the x86 became 32 bits starting with the 386).

The second one uses all 16 bits of the AX register (which is really
the lower 16 bits of the EAX register).

BUT both have the same opcode: 0xB8.

This seems strange since the operations are really different: one is
16 bits, the other is 32 bits. The CPU _cannot_ distinguish between
the two different operations because they have the same opcode.

However : the CPU uses the "default bit" (or "D bit") of the code
segment to establish whether a 16 bit operand size is the default and,
thus, the lower 16 bits of EAX (aka AX) are used OR a 32 bit operand
is the default and, thus, all of the 32 bits of EAX are used.

One can, however, change this behaviour by using a size prefix (0x66)
so that the behaviour according to the D bit of the code segment is
inverted. Of course, the assembler might do this for the programmer
automatically based on whether "EAX" or "AX" is used in this example.

However, using such a prefix (and there are more such prefixes for
mixing 16 and 32 bit code) comes with a performance penalty.

Since DOS works in real or V86 mode in which 16 bit is the default
size (after all : it mimics the 8086/88), it makes perfect sense to
have int being 16 bits (for the 8086/88, there wasn't even an option
since the registers were only 16 bits wide, using 32 bits would be
dreadfully slow). Even though the Standard doesn't require it, int is
supposed to be the "quick" datatype.
Nov 14 '05 #7
On Mon, 30 May 2005 00:33:01 +0200, Paul Mesken <us*****@euronet.nl>
wrote in comp.lang.c:
On Sun, 29 May 2005 21:12:16 GMT, Keith Thompson <ks***@mib.org>
wrote:
Paul Mesken <us*****@euronet.nl> writes:
On 29 May 2005 07:02:12 -0700, "th************@hotmail.com"
<th************@hotmail.com> wrote:

Greetings!!
Currently I am learning C from K&R. I have DOS & Linux on my
system. When I used the sizeof operator to compute the size of an
integer, it showed different results in DOS and Linux: in DOS an
integer occupies 2 bytes, but in Linux 4 bytes. Does the sizeof
operator "really" show the machine data or register size?
Please help me!!

DOS was made for the old 8086/88 processors. These were 16-bit
processors (hence int being 2 bytes). On newer machines, the CPU runs
in a compatibility mode when running DOS (either real mode or V86
mode). That doesn't mean the processor is suddenly a 16-bit
processor; it only acts like one.

So in this case sizeof doesn't show the register size (which is 32
bits), only the default operand size, which is 16 bits in this
compatibility mode.


sizeof(int) simply yields the size, in bytes, of an int. It's up to
the compiler implementer to decide how big an int is going to be.
It's often whatever size will fit in a machine register, but that's
not required.

On the hardware level, an x86 machine (which I assume is what the OP
is using) can operate on 8-bit, 16-bit, or 32-bit quantities.


Yes, 16 bit and 32 bit operations can be mixed but one of them will
most certainly (there are some exceptions) incur a performance penalty
since it requires a size prefix. Which one (16 or 32) depends on what
the "default bit" of the code segment is. With an assembler, the
programmer sets the default operand size of the code segment by
instructions like USE16 or USE32.


What does assembly language or modes of the Intel processor have to do
with C? Caring about whether or not there is a performance penalty
for one thing or another thing is off-topic here, is a micro
optimization, and has nothing to do with the C language, which does
not specify anything at all about the speed or efficiency of anything.
The C
language specifies a range of integer types: char (at least 8 bits),
short (at least 16 bits), int (at least 16 bits), and long (at least
32 bits). (On some older machines, 32-bit operations might be more
difficult). As you can see, a compiler implementer has some
flexibility in choosing how to assign sizes to the various C types.
Typical choices are:
char 8 bits
short 16 bits
int 16 bits
long 32 bits
and
char 8 bits
short 16 bits
int 32 bits
long 32 bits

It happens that the former choice (16-bit int) is more convenient on
the older systems that DOS was designed for, and the latter (32-bit
int) is more convenient for the newer systems on which Linux typically
runs.

(I'm ignoring the type "long long", introduced in C99, which is at
least 64 bits.)

All this is only indirectly related to the "default operand size"; I'm
not even sure what that means.


Consider this :

MOV EAX, 0x01020304
MOV AX, 0x0102

[snip much extremely off-topic text]

If you want to expound about the bizarre quirks of the Frankenstein
patchwork Intel X86 architecture, I'd suggest you do so in
news:comp.lang.asm.x86, where it is appreciated and topical.

It is neither of those things here.

--
Jack Klein
Home: http://JK-Technology.Com
FAQs for
comp.lang.c http://www.eskimo.com/~scs/C-faq/top.html
comp.lang.c++ http://www.parashift.com/c++-faq-lite/
alt.comp.lang.learn.c-c++
http://www.contrib.andrew.cmu.edu/~a...FAQ-acllc.html
Nov 14 '05 #8
On Sun, 29 May 2005 18:31:15 -0500, Jack Klein <ja*******@spamcop.net>
wrote:
Caring about whether or not there is a performance penalty
for one thing or another thing is off-topic here, is a micro
optimization, and has nothing to do with the C language, which does
not specify anything at all about the speed or efficiency of anything.
Nownownow Jack, that's not the right attitude for a C programmer ;-)

Don't you think our non-Assembly programming C brothers/sisters have a
right to know that a division is typically slower than an addition,
for example? Or is such a statement off topic here because the
Standard doesn't mention this? (even though it is true in the real
world).

If speed and efficiency are of no concern then there are better
languages than C. C is not the only portable language (how about
Java?).

Isn't it so that C is a "Language of Choice" because it's "close to
the metal"? Computer architecture _is_ that "metal". A lot of
decisions made for the implementation of the compiler make more sense
(like the difference in sizeof(int) the OP experienced) when the
underlying computer architecture (which the compiler targets) is
explained.

Of course, we could deny the fact that programs written in C are
actually meant to run on a computer. But this would reduce discussion
to a purely academic one, interesting only to mathematicians and
linguists and devoid of practical experience. We would just be quoting
the Standard all the time.
If you want to expound about the bizarre quirks of the Frankenstein
patchwork Intel X86 architecture, I'd suggest you do so in
news:comp.lang.asm.x86, where it is appreciated and topical.


Well, the question wasn't asked there and even though it's obvious
that you want me to save that group, I'm a bit rusty (from too much
C/C++/SQL programming) and afraid that Terje would come up with
solutions quicker than my own ;-)

Nov 14 '05 #9
Paul Mesken wrote:
On Sun, 29 May 2005 18:31:15 -0500, Jack Klein <ja*******@spamcop.net>
wrote:

Caring about whether or not there is a performance penalty
for one thing or another thing is off-topic here, is a micro
optimization, and has nothing to do with the C language, which does
not specify anything at all about the speed or efficiency of anything.

Nownownow Jack, that's not the right attitude for a C programmer ;-)


Actually it is.
Don't you think our non-Assembly programming C brothers/sisters have a
right to know that a division is typically slower than an addition,
for example? Or is such a statement off topic here because the
Standard doesn't mention this? (even though it is true in the real
world).
All of which would be relevant if `implementation' were part of the name
of this newsgroup -- or if the standard set out any particular
requirements about how constructs behave.
If speed and efficiency are of no concern then there are better
languages than C. C is not the only portable language (how about
Java?).
All of which would be relevant if `advocacy' were part of the title of
this newsgroup.
Isn't it so that C is a "Language of Choice" because it's "close to
the metal"? Computer architecture _is_ that "metal". A lot of
decisions made for the implementation of the compiler make more sense
(like the difference in sizeof(int) the OP experienced) when the
underlying computer architecture (which the compiler targets) is
explained.
No, it's just an area where implementations are free to make certain
choices, according to the standard.
Of course, we could deny the fact that programs written in C are
actually meant to run on a computer. But this would reduce discussion
to a completely academical one, interesting only to mathematicians and
linguists and devoid of practical experience. We would just be quoting
the Standard all the time.
Programs written in C are designed to run on the virtual machine that
the language definition provides. The underlying platform could just as
easily be a computer or a flock of cooperative pigeons.

Topicality is important, serving two purposes: Maintaining the
signal/noise ratio at an appropriate level (lest those who *can* answer
questions bail) and limiting the subject matter in order that answers
can be properly vetted by the community.

Cheers,
--ag

--
Artie Gold -- Austin, Texas
http://it-matters.blogspot.com (new post 12/5)
http://www.cafepress.com/goldsays
Nov 14 '05 #10

This thread has been closed and replies have been disabled. Please start a new discussion.
