Bytes IT Community

Some Questions #2

some other questions:

1) K&R, p50, 2.10:

"if expr1 and expr2 are expressions, then
expr1 op= expr2
is equivalent to
expr1 = (expr1) op (expr2)
except that expr1 is computed only once."

what does "except that expr1 is computed only once" mean? can you give
an example in which this property is important?

2) is it true that the value of a character constant such as 'C' depends
on the system character set (ASCII, EBCDIC, ...)? so if I write
something like 'C' - 20, will my program become non-portable?

3) is it true that a char *must* be 1 byte? here, instead,
http://www-ccs.ucsd.edu/c/types.html...nteger%20Types says that the
requirement is only that the *minimum* range for unsigned char, for
example, must be 0 to 255, so it could be 2 bytes or 200 bytes
without any problem.

4) I have some trouble distinguishing declarations from definitions.

for example:

double function(double x, double y); /* prototype (a kind of function
declaration) */

double function(double x, double y) { /* function definition */

int a; /* definition? declaration? */

}

also, are 'double x, double y' in the function definition two
definitions or two declarations?

thanks.
Apr 14 '06 #1
17 Replies


fctk wrote:
some other questions:

1) K&R, p50, 2.10:

"if expr1 and expr2 are expressions, then
expr1 op= expr2
is equivalent to
expr1 = (expr1) op (expr2)
except that expr1 is computed only once."

what does "except that expr1 is computed only once" mean? can you give
an example in which this property is important?
Hard to say. The value of expr1 is computed only once in both cases.
2) is it true that the value of a character constant such as 'C' depends
on the system character set (ASCII, EBCDIC, ...)? so if I write
something like 'C' - 20, will my program become non-portable?
Not necessarily. It depends on what you do with the result.
3) is it true that a char *must* be 1 byte? here, instead,
http://www-ccs.ucsd.edu/c/types.html...nteger%20Types says that the
requirement is only that the *minimum* range for unsigned char, for
example, must be 0 to 255, so it could be 2 bytes or 200 bytes
without any problem.
Yes. The terms char and byte are generally interchangeable and
'sizeof(char)' is guaranteed 1.
4) I have some trouble distinguishing declarations from definitions.

for example:

double function(double x, double y); /* prototype (a kind of function
declaration) */

double function(double x, double y) { /* function definition */

int a; /* definition? declaration? */

}

You've got those two right.
also, are 'double x, double y' in the function definition two
definitions or two declarations?
A declaration presents information to the compiler without affecting
memory (storage) at all. Function prototypes are declarations.

A definition is also a declaration (info for the compiler) but it
demands space for itself in memory.
thanks.

--
Joe Wright
"Everything should be made as simple as possible, but not simpler."
--- Albert Einstein ---
Apr 14 '06 #2



fctk wrote On 04/14/06 13:21,:
some other questions:

1) K&R, p50, 2.10:

"if expr1 and expr2 are expressions, then
expr1 op= expr2
is equivalent to
expr1 = (expr1) op (expr2)
except that expr1 is computed only once."

what does "except that expr1 is computed only once" mean? can you give
an example in which this property is important?
array[ expensive_function(x) ] += 1;

It becomes even more important if the computation doesn't
always return the same answer:

array[ getchar() ] += 1;
2) is it true that the value of a character constant such as 'C' depends
on the system character set (ASCII, EBCDIC, ...)? so if I write
something like 'C' - 20, will my program become non-portable?
Yes, it is true that the numeric value of 'C' depends on
the character encoding. Yes, it is true that doing arithmetic
of the kind you've shown produces a result whose meaning will
vary from one implementation to another. (Exception: the codes
for the digits must be consecutive and in ascending order, so
'0'+1 == '1', ..., '8'+1 == '9'. However, 'a'+1 == 'b' is not
guaranteed.)
3) is it true that a char *must* be 1 byte? here, instead,
http://www-ccs.ucsd.edu/c/types.html...nteger%20Types says that the
requirement is only that the *minimum* range for unsigned char, for
example, must be 0 to 255, so it could be 2 bytes or 200 bytes
without any problem.
In C, a byte is defined as synonymous with a char. Some
other fields use the term "byte" to mean "a unit of storage
comprising eight bits," but C does not: in C, a byte is a char
no matter how wide a char is.

This is really the same situation that confronts you in
understanding other words: Different realms of discourse use
the same word with different meanings, and you must know the
context to find the proper meaning. For example, consider the
various meanings "file" might have if you're talking about

- how to use the fopen() function

- the formations exhibited by a marching band

- the contents of a metalworker's toolbox

- recipes from a Cajun cookbook (well, the pronunciation
might tip you off about this one)
4) I have some trouble distinguishing declarations from definitions.

for example:

double function(double x, double y); /* prototype (a kind of function
declaration) */

double function(double x, double y) { /* function definition */

int a; /* definition? declaration? */

}
A declaration describes the type of object or function
an identifier designates. A definition also brings the
object or function into existence, allocating storage for
objects or providing the executable code for functions.
Every definition is also a declaration, but a declaration
is not necessarily a definition. When I say "fctk is a
student of C" I make a declaration about what "fctk" means;
a definition is more like "fctk is a student of C, and here
he is (smile, fctk, smile and wave)."

A prototype is a description of a function's parameters.
A prototype may appear as part of a function declaration or
as part of a function definition, but the prototype is only
the part about the parameter list. Your first example is a
function declaration using a prototype; your second is a
function definition using a prototype. Neither of them "is"
a prototype in its entirety.
also, are 'double x, double y' in the function definition two
definitions or two declarations?


Formally speaking, they are declarations. Note that some
of the "decorations" that are possible with definitions are
not possible with function parameters; these are illegal:

double f(static double x, extern double y,
double z = 42.0, typedef double trouble)

Function parameters don't need definitions as such; if you
want, you can think of them as being "defined" as part of
the process of calling the function. Somehow the compiler
causes the parameters to come into existence and be initialized
with the argument values as part of the function invocation,
and somehow the compiler causes them to vanish again when the
function returns. If you are interested in how compilers and
related tools work, the mechanics of this buck-passing are of
importance to you; if you simply need to use C, they are not.

--
Er*********@sun.com

Apr 14 '06 #3

On Fri, 14 Apr 2006 19:21:37 +0200, in comp.lang.c , fctk <-> wrote:
what does "except that expr1 is computed only once" mean? can you give
an example in which this property is important?
it would be important if "expr1" changed each time you invoked it, for
example x++
2) is it true that the value of a character constant such as 'C' depends
on the system character set (ASCII, EBCDIC, ...)? so if I write
something like 'C' - 20, will my program become non-portable?
Yes.
3) is it true that a char *must* be 1 byte?
Yes. This is the definition of a byte in C.
here, instead,
http://www-ccs.ucsd.edu/c/types.html...nteger%20Types says that the
requirement is only that the *minimum* range for unsigned char, for
example, must be 0 to 255, so it could be 2 bytes or 200 bytes
without any problem.
It's true the min range must be 0-255, but this only defines the minimum
number of bits. A "byte" is defined in C to be as large as the number
of bits in a character, even if that is 8, 9, 32 or 36.
4) I have some trouble distinguishing declarations from definitions.
A declaration doesn't define something. If the statement either sets
the value of something, or contains code which will do so, then it's a
definition.
also, are 'double x, double y' in the function definition two
definitions or two declarations?


In a declaration, they're neither (since you can leave them out and it
doesn't make any difference).
In a function definition, they're declarations of the parameters.
Mark McIntyre
--
"Debugging is twice as hard as writing the code in the first place.
Therefore, if you write the code as cleverly as possible, you are,
by definition, not smart enough to debug it."
--Brian Kernighan
Apr 14 '06 #4

fctk wrote:

some other questions:

1) K&R, p50, 2.10:

"if expr1 and expr2 are expressions, then
expr1 op= expr2
is equivalent to
expr1 = (expr1) op (expr2)
except that expr1 is computed only once."

what does "except that expr1 is computed only once" mean? can you give
an example in which this property is important?
Consider:

*pt++ += foo;

Or:

*SomeFunctionThatTakesALongTime() += bar;

Or:

*NextItemInList() += foobar;
2) is it true that the value of a character constant such as 'C' depends
on the system character set (ASCII, EBCDIC, ...)? so if I write
Correct.
something like 'C' - 20, will my program become non-portable?
It depends on what you do with 'C'-20. If you expect it to be '/', then
it won't necessarily work on non-ASCII systems. If there isn't any
particular significance to 'C' and 20, then it may be just fine.

But things like adding 32 to an uppercase letter and expecting to get
the corresponding lowercase letter is definitely non-portable if you
intend to use a non-ASCII system somewhere in the future.
3) is it true that a char *must* be 1 byte? here, instead,
http://www-ccs.ucsd.edu/c/types.html...nteger%20Types says that the
requirement is only that the *minimum* range for unsigned char, for
example, must be 0 to 255, so it could be 2 bytes or 200 bytes
without any problem.


Well, "sizeof(char)" must be "1". But, if by "byte" you mean "a group
of 8 bits", then the answer to your question is "no", as a char may be
larger than 8 bits.

[...]

--
+-------------------------+--------------------+-----------------------------+
| Kenneth J. Brody | www.hvcomputer.com | |
| kenbrody/at\spamcop.net | www.fptech.com | #include <std_disclaimer.h> |
+-------------------------+--------------------+-----------------------------+
Don't e-mail me at: <mailto:Th*************@gmail.com>
Apr 14 '06 #5

Kenneth Brody <ke******@spamcop.net> writes:
fctk wrote:

[...]
3) is it true that a char *must* be 1 byte? here, instead,
http://www-ccs.ucsd.edu/c/types.html...nteger%20Types says that the
requirement is only that the *minimum* range for unsigned char, for
example, must be 0 to 255, so it could be 2 bytes or 200 bytes
without any problem.


Well, "sizeof(char)" must be "1". But, if by "byte" you mean "a group
of 8 bits", then the answer to your question is "no", as a char may be
larger than 8 bits.


If by "byte" you mean "a group of 8 bits", you're not using the word
in the way defined by the C standard -- or by historical usage.

--
Keith Thompson (The_Other_Keith) ks***@mib.org <http://www.ghoti.net/~kst>
San Diego Supercomputer Center <*> <http://users.sdsc.edu/~kst>
We must do something. This is something. Therefore, we must do this.
Apr 14 '06 #6

Keith Thompson wrote:
Kenneth Brody <ke******@spamcop.net> writes:
fctk wrote:

[...]
3) is it true that a char *must* be 1 byte? here, instead,
http://www-ccs.ucsd.edu/c/types.html...nteger%20Types says that the
requirement is only that the *minimum* range for unsigned char, for
example, must be 0 to 255, so it could be 2 bytes or 200 bytes
without any problem.

Well, "sizeof(char)" must be "1". But, if by "byte" you mean "a group
of 8 bits", then the answer to your question is "no", as a char may be
larger than 8 bits.


If by "byte" you mean "a group of 8 bits", you're not using the word
in the way defined by the C standard -- or by historical usage.

You are, however, using the word as it's used by the majority of people
today. Not to imply that this gives that usage a privileged status, of course.

S.
Apr 14 '06 #7

On Fri, 14 Apr 2006 19:21:37 +0200, fctk <-> wrote:
some other questions:
<snip>
4) I have some trouble distinguishing declarations from definitions.

for example:

double function(double x, double y); /* prototype (a kind of function
declaration) */

double function(double x, double y) { /* function definition */

int a; /* definition? declaration? */

}

also, are 'double x, double y' in the function definition two
definitions or two declarations?

The best memory aid I have seen for distinguishing declarations from
definitions is

Think of definitions as in a dictionary. The book has mass
and occupies space. (It is real.)

Think of declarations as statements by politicians. They do
not guarantee the topic will ever really have a real existence. (They
are ephemeral.)

In C

A definition causes whatever is being defined to actually
exist in your program. If it is an object (variable, array, struct,
etc), then memory is reserved for it. If it is a function, then the
code is incorporated into your program. The existence of a definition
"overrides" any previous or subsequent declarations so you can say
that the definition serves as a declaration also.

A declaration serves only to describe the object or function,
that is its attributes. For example, the declaration
extern int x;
does not define x (memory is not allocated) but it does promise the
compiler that when the program is ready to execute x will be defined
somewhere and will have the correct size and alignment for an int. (It
is the linker's job to figure out how to resolve references to x but
that is a different topic.) Similarly, a function prototype does not
define the function but does promise the compiler that the function
will exist in the program and will take the arguments described and
return the object specified.
Remove del for email
Apr 15 '06 #8

>>3) is it true that a char *must* be 1 byte? here, instead,
http://www-ccs.ucsd.edu/c/types.html...nteger%20Types says that the
requirement is only that the *minimum* range for unsigned char, for
example, must be 0 to 255, so it could be 2 bytes or 200 bytes
without any problem.

In C, a byte is defined as synonymous with a char. Some
other fields use the term "byte" to mean "a unit of storage
comprising eight bits," but C does not: in C, a byte is a char
no matter how wide a char is.


ok, in C "byte" and "char" are synonymous. but is it correct that a char
does not necessarily need to be 8 bits?
4) I have some trouble distinguishing declarations from definitions.

for example:

double function(double x, double y); /* prototype (a kind of function
declaration) */

double function(double x, double y) { /* function definition */

int a; /* definition? declaration? */

}

also, are 'double x, double y' in the function definition two
definitions or two declarations?

Formally speaking, they are declarations. Note that some
of the "decorations" that are possible with definitions are
not possible with function parameters; these are illegal:

double f(static double x, extern double y,
double z = 42.0, typedef double trouble)


why are `double x, double y' in

double function(double x, double y) { /* ... */ }

declarations? yes, they tell the compiler that x and y refer to two
objects of type double, but memory for them is also allocated when
the function is called.

much as in:

void nothing(void) {
int a; /* definition */
}
Apr 15 '06 #9

fctk <-> said:
ok, in C "byte" and "char" are synonymous.
I think of "byte" as a capacity and "char" as the kind of thing that would
need a capacity in which to be stored. But yes, one char fits precisely
into one byte.
but is it correct that char
does not necessarily need to be 8 bits?
Yes. I've used systems where chars need 32 bits (and therefore bytes are 32
bits wide).
why `double x, double y' in

double function(double x, double y) { /* ... */ }

are declarations? yes, they tell the compiler x and y refers to two
object of type double, but also the memory for them is allocated when
the function is called.


They are definitions as well as declarations. All object definitions and
function definitions are also declarations.

--
Richard Heathfield
"Usenet is a strange place" - dmr 29/7/1999
http://www.cpax.org.uk
email: rjh at above domain (but drop the www, obviously)
Apr 15 '06 #10

Mark McIntyre ha scritto:
On Fri, 14 Apr 2006 19:21:37 +0200, in comp.lang.c , fctk <-> wrote:
3) is it true that a char *must* be 1 byte?


Yes. This is the definition of a byte in C.
here, instead,
http://www-ccs.ucsd.edu/c/types.html...nteger%20Types says that the
requirement is only that the *minimum* range for unsigned char, for
example, must be 0 to 255, so it could be 2 bytes or 200 bytes
without any problem.


It's true the min range must be 0-255, but this only defines the minimum
number of bits. A "byte" is defined in C to be as large as the number
of bits in a character, even if that is 8, 9, 32 or 36.


let's see if i have understood.

C does not know that 1 byte == 8 bits.
C needs an unsigned char to be able to represent numbers from 0 to 255.
So at least 8 bits are needed. But nothing prohibits an unsigned char from
being 10 bits, so it can represent numbers from 0 to 1023.
C defines 1 byte as the number of bits occupied by a char (in this case,
10 bits).
sizeof(type) returns the number of bytes needed by type ("byte" defined
as the number of bits occupied by a char).
So of course sizeof(char) == 1.

Question: is it true that the number of bits needed by a certain type
must be a multiple of those needed by char?

So for example in my case, if char is 10 bits, then an int can be, say,
20 bits, but it can't be 16 bits.
Apr 15 '06 #11

On Sat, 15 Apr 2006 12:12:36 +0200, in comp.lang.c , fctk <-> wrote:
let's see if i have understood.

C does not know that 1 byte == 8 bits.
C needs an unsigned char to be able to represent numbers from 0 to 255.
So at least 8 bits are needed. But nothing prohibits an unsigned char from
being 10 bits, so it can represent numbers from 0 to 1023.
C defines 1 byte as the number of bits occupied by a char (in this case,
10 bits).
sizeof(type) returns the number of bytes needed by type ("byte" defined
as the number of bits occupied by a char).
So of course sizeof(char) == 1.
All correct.
Question: is it true that the number of bits needed by a certain type
must be a multiple of those needed by char?


I don't believe there's any such requirement, but any actual hardware
implementation would be likely to go for that sort of layout.
Mark McIntyre
--
"Debugging is twice as hard as writing the code in the first place.
Therefore, if you write the code as cleverly as possible, you are,
by definition, not smart enough to debug it."
--Brian Kernighan
Apr 15 '06 #12

On Sat, 15 Apr 2006 11:45:43 +0200, in comp.lang.c , fctk <-> wrote:

why are `double x, double y' in

double function(double x, double y) { /* ... */ }

declarations? yes, they tell the compiler that x and y refer to two
objects of type double, but memory for them is also allocated when
the function is called.


This is true of all objects - when the function is called, all objects
it declares must have memory allocated. The difference is that a
definition has a value.
Mark McIntyre
--
"Debugging is twice as hard as writing the code in the first place.
Therefore, if you write the code as cleverly as possible, you are,
by definition, not smart enough to debug it."
--Brian Kernighan
Apr 15 '06 #13

On Sat, 15 Apr 2006 12:12:36 +0200, fctk <-> wrote:
Mark McIntyre ha scritto:
On Fri, 14 Apr 2006 19:21:37 +0200, in comp.lang.c , fctk <-> wrote:
3) is it true that a char *must* be 1 byte?
Yes. This is the definition of a byte in C.
here, instead,
http://www-ccs.ucsd.edu/c/types.html...nteger%20Types says that the
requirement is only that the *minimum* range for unsigned char, for
example, must be 0 to 255, so it could be 2 bytes or 200 bytes
without any problem.


It's true the min range must be 0-255, but this only defines the minimum
number of bits. A "byte" is defined in C to be as large as the number
of bits in a character, even if that is 8, 9, 32 or 36.


let's see if i have understood.

C does not know that 1 byte == 8 bits.
C needs an unsigned char to be able to represent numbers from 0 to 255.
So at least 8 bits are needed. But nothing prohibits an unsigned char from
being 10 bits, so it can represent numbers from 0 to 1023.
C defines 1 byte as the number of bits occupied by a char (in this case,
10 bits).
sizeof(type) returns the number of bytes needed by type ("byte" defined
as the number of bits occupied by a char).
So of course sizeof(char) == 1.

Question: is it true that the number of bits needed by a certain type
must be a multiple of those needed by char?


Every object has a size which is some multiple of the sizeof(char).
The object therefore occupies CHAR_BIT*sizeof(char) bits.

Whether they are all needed is a different question. Some of the bits
may be unneeded and would be called padding. Consider
struct{int i; char c;}s;
on a system with 4-byte aligned int and 8 bit bytes. s only needs 5
bytes (40 bits) but would be padded to 8 bytes.

Even a scalar object can have padding. I used to work on a system
where a pointer was 24 bits but occupied a 32 bit word. The high
order 8 bits were unused in address computation.

So for example in my case, if char is 10 bits, then an int can be, say,
20 bits, but it can't be 16 bits.


While sizeof(int) would be 2, INT_MAX could still be 65535 instead of
1048575.
Remove del for email
Apr 15 '06 #14

Mark McIntyre <ma**********@spamcop.net> writes:
On Sat, 15 Apr 2006 12:12:36 +0200, in comp.lang.c , fctk <-> wrote:

[...]
Question: is it true that the number of bits needed by a certain type
must be a multiple of those needed by char?


I don't believe theres any such requirement, but any actual hardware
implementation would be likely to go for that sort of layout.


Any object other than a bit field has a size that is a whole number of
bytes. (Not all the bits necessarily contribute to the representation
of the object's value.)

--
Keith Thompson (The_Other_Keith) ks***@mib.org <http://www.ghoti.net/~kst>
San Diego Supercomputer Center <*> <http://users.sdsc.edu/~kst>
We must do something. This is something. Therefore, we must do this.
Apr 15 '06 #15

Barry Schwarz <sc******@doezl.net> writes:
On Sat, 15 Apr 2006 12:12:36 +0200, fctk <-> wrote:

[...]
Question: is it true that the number of bits needed by a certain type
must be a multiple of those needed by char?


Every object has a size which is some multiple of the sizeof(char).
The object therefore occupies CHAR_BIT*sizeof(char) bits.


I think you mean CHAR_BIT * sizeof(object) bits.

--
Keith Thompson (The_Other_Keith) ks***@mib.org <http://www.ghoti.net/~kst>
San Diego Supercomputer Center <*> <http://users.sdsc.edu/~kst>
We must do something. This is something. Therefore, we must do this.
Apr 15 '06 #16

fctk <-> writes:
[...]
let's see if i have understood.

C does not know that 1 byte == 8 bits.

[...]

Yes, but I wouldn't have phrased it that way; your wording implies
that a byte is *really* 8 bits, but C doesn't know that.

C has its own definition of "byte", namely an "addressable unit of
data storage large enough to hold any member of the basic character
set of the execution environment" (C99 3.6). The number of bits in a
byte is specified by the CHAR_BIT macro in <limits.h>, and is required
to be at least 8.

In contexts other than C, a "byte" is almost universally 8 bits, but
historically many systems have had byte sizes other than 8 bits.
Fairly common values are 6 and 9 bits, and some systems have even had
variable byte sizes.

If you want to refer specifically to 8-bit quantities, you can use the
word "octet".

(Possibly it would have been less confusing if C had chosen a word
other than "byte". I think the assumption that a byte is exactly 8
bits wasn't as strong when the language was first being designed, so
having the size of a byte vary from one implementation to another
seemed less odd than it does now.)

--
Keith Thompson (The_Other_Keith) ks***@mib.org <http://www.ghoti.net/~kst>
San Diego Supercomputer Center <*> <http://users.sdsc.edu/~kst>
We must do something. This is something. Therefore, we must do this.
Apr 15 '06 #17

On Sat, 15 Apr 2006 19:53:45 GMT, in comp.lang.c , Keith Thompson
<ks***@mib.org> wrote:
Mark McIntyre <ma**********@spamcop.net> writes:
On Sat, 15 Apr 2006 12:12:36 +0200, in comp.lang.c , fctk <-> wrote:

[...]
Question: is it true that the number of bits needed by a certain type
must be a multiple of those needed by char?


I don't believe theres any such requirement, but any actual hardware
implementation would be likely to go for that sort of layout.


Any object other than a bit field has a size that is a whole number of
bytes. (Not all the bits necessarily contribute to the representation
of the object's value.)


Right, I was thinking of value bits when I wrote the above, thanks for
the clarification / correction.

Mark McIntyre
--
"Debugging is twice as hard as writing the code in the first place.
Therefore, if you write the code as cleverly as possible, you are,
by definition, not smart enough to debug it."
--Brian Kernighan
Apr 15 '06 #18
