Bytes IT Community

What Defines the "C" Language?

In Harbison and Steele's text (fourth edition, p.111)
it is stated,

The C language does not specify the range of integers that the
integral types will represent, except to say that type int may not
be smaller than short and long may not be smaller than int.

They go on to say,

Many implementations represent characters in 8 bits, type short in
16 bits, and type long in 32 bits, with type int using either 16 or
32 bits depending on the implementation. ISO C requires
implementations to use at least these widths.

If the C language is not defined by ISO C, then what defines it?
--
Randy Yates
Sony Ericsson Mobile Communications
Research Triangle Park, NC, USA
ra*********@sonyericsson.com, 919-472-1124
Nov 14 '05 #1
86 Replies

"Randy Yates" <ra*********@sonyericsson.com> wrote in message
news:xx*************@usrts005.corpusers.net...
In Harbison and Steele's text (fourth edition, p.111)
it is stated,

The C language does not specify the range of integers that the
integral types will represent, except to say that type int may not
be smaller than short and long may not be smaller than int.

They go on to say,

Many implementations represent characters in 8 bits, type short in
16 bits, and type long in 32 bits, with type int using either 16 or
32 bits depending on the implementation. ISO C requires
implementations to use at least these widths.

If the C language is not defined by ISO C, then what defines it?


There are requirements for conforming implementations, specified in ISO
standard documents, such as those you quote above. There is some leeway in
these requirements, as illustrated above, presumably to allow appropriate
choices to be made when creating an implementation for some specific
platform.

What is in the quotes that makes you think it is not defined by ISO C?

Alex
Nov 14 '05 #2

Alex Fraser wrote:
"Randy Yates" <ra*********@sonyericsson.com> wrote in message
news:xx*************@usrts005.corpusers.net...
In Harbison and Steele's text (fourth edition, p.111)
it is stated,

The C language does not specify the range of integers that the
integral types will represent, except to say that type int may not
be smaller than short and long may not be smaller than int.

They go on to say,

Many implementations represent characters in 8 bits, type short in
16 bits, and type long in 32 bits, with type int using either 16 or
32 bits depending on the implementation. ISO C requires
implementations to use at least these widths.

If the C language is not defined by ISO C, then what defines it?

There are requirements for conforming implementations, specified in ISO
standard documents, such as those you quote above. There is some leeway in
these requirements, as illustrated above, presumably to allow appropriate
choices to be made when creating an implementation for some specific
platform.

What is in the quotes that makes you think it is not defined by ISO C?

Looks to me like he's asking: if ISO doesn't define this'n'that (integer
sizes etc.), who does for a given arch/compiler?

--
Nils O. Selåsdal
www.utelsystems.com
Nov 14 '05 #3

"Nils O. Selåsdal" <NO*@Utel.no> writes:
Alex Fraser wrote:
"Randy Yates" <ra*********@sonyericsson.com> wrote in message
news:xx*************@usrts005.corpusers.net...

In Harbison and Steele's text (fourth edition, p.111)
it is stated,

The C language does not specify the range of integers that the
integral types will represent, except to say that type int may not
be smaller than short and long may not be smaller than int.

They go on to say,

Many implementations represent characters in 8 bits, type short in
16 bits, and type long in 32 bits, with type int using either 16 or
32 bits depending on the implementation. ISO C requires
implementations to use at least these widths.

If the C language is not defined by ISO C, then what defines it?

There are requirements for conforming implementations, specified in
ISO

standard documents, such as those you quote above. There is some leeway in
these requirements, as illustrated above, presumably to allow appropriate
choices to be made when creating an implementation for some specific
platform.
What is in the quotes that makes you think it is not defined by ISO
C?


Looks to me like he's asking if ISO doesn't define this'n'that(integer
sizes etc.) Who does for a given arch/compiler ?


Prexactly. :)

I seem to recall conversations in years past of old machines that
had strange integer sizes (9 bits?) which C would support. Am I
delusional?
--
Randy Yates
Sony Ericsson Mobile Communications
Research Triangle Park, NC, USA
ra*********@sonyericsson.com, 919-472-1124
Nov 14 '05 #4

"Randy Yates"
"Nils O. Selåsdal" <NO*@Utel.no>
Alex Fraser wrote:
> "Randy Yates"
>In Harbison and Steele's text (fourth edition, p.111)
[implementation of integer stuff]
What is in the quotes that makes you think it is not defined by ISO C?


Looks to me like he's asking if ISO doesn't define this'n'that(integer
sizes etc.) Who does for a given arch/compiler ?


Prexactly. :)

I seem to recall conversations in years past of old machines that
had strange integer sizes (9 bits?) which C would support. Am I
delusional?


You are delusional if you think that the assiduous study of H&S won't reveal
a rich, flexible language that uses the ANSI/ISO standard as a bulwark. MPJ
Nov 14 '05 #5

Merrill & Michele wrote:
"Randy Yates"
"Nils O. Selåsdal" <NO*@Utel.no>

Alex Fraser wrote:

>"Randy Yates"
>In Harbison and Steele's text (fourth edition, p.111)
[implementation of integer stuff]
What is in the quotes that makes you think it is not defined by ISO
C?

Looks to me like he's asking if ISO doesn't define this'n'that(integer
sizes etc.) Who does for a given arch/compiler ?
Prexactly. :)

I seem to recall conversations in years past of old machines that
had strange integer sizes (9 bits?) which C would support. Am I
delusional?


The standard demands CHAR_BIT>=8, which gives you no problem with
9-bit bytes. Furthermore, the effective range of signed int/unsigned int
according to the standard is that of a 16-bit number (but for INT_MIN),
so sizeof(int)*CHAR_BIT==18 (or 32, nowadays) gives you no problem,
either. The same for 36-bit longs.
IMO, H&S have that right. Or did I misunderstand your question, too?

BTW: The good old machines with 6 bits to a byte probably do not
have any C implementations to speak of (... I wait to be contradicted).
You are delusional if you think that the assiduous study of H&S won't reveal
a rich, flexible language that uses the ANSI/ISO standard as a bulwark. MPJ


Your point was... ? Do you want to encourage/contradict/... the OP?
Cheers
Michael
--
E-Mail: Mine is an /at/ gmx /dot/ de address.
Nov 14 '05 #6

In article <xx*************@usrts005.corpusers.net>,
Randy Yates <ra*********@sonyericsson.com> wrote:
I seem to recall conversations in years past of old machines that
had strange integer sizes (9 bits?) which C would support. Am I
delusional?


Nine bits are not enough for the required ranges of integer types
such as short, int and long. It is, however, a perfectly valid
size for a byte, i.e. a char, unsigned char or signed char, in C.
A nine bit byte is only a strange size for people that view the
world through a PC.

--
Göran Larsson http://www.mitt-eget.com/
Nov 14 '05 #7

ho*@invalid.invalid (Goran Larsson) writes:
In article <xx*************@usrts005.corpusers.net>,
Randy Yates <ra*********@sonyericsson.com> wrote:
I seem to recall conversations in years past of old machines that
had strange integer sizes (9 bits?) which C would support. Am I
delusional?


Nine bits are not enough for the required ranges of integer types
such as short, int and long.


What ranges are those? Where are they specified? This statement contradicts
H&S: "The C language does not specify the range of integers that the
integral types will represent...".
--
Randy Yates
Sony Ericsson Mobile Communications
Research Triangle Park, NC, USA
ra*********@sonyericsson.com, 919-472-1124
Nov 14 '05 #8


"Randy Yates" <ra*********@sonyericsson.com> wrote in message
news:xx*************@usrts005.corpusers.net...
"Nils O. Selåsdal" <NO*@Utel.no> writes: Prexactly. :)

I seem to recall conversations in years past of old machines that
had strange integer sizes (9 bits?) which C would support. Am I
delusional?
Back in the Bad Old Days (tm) of bit-slicers, any number of bits was
possible. I recall a conversation with an "old pro" who told me about
47-bit computers.
--
Randy Yates
Sony Ericsson Mobile Communications
Research Triangle Park, NC, USA
ra*********@sonyericsson.com, 919-472-1124

Nov 14 '05 #9

"Michael Mair"
Merrill & Michele wrote:
"Randy Yates"
"Nils O. Selåsdal" <NO*@Utel.no>

>Alex Fraser wrote:
>
>>"Randy Yates"
>>In Harbison and Steele's text (fourth edition, p.111)
[implementation of integer stuff]
>What is in the quotes that makes you think it is not defined by ISO
>C?

Looks to me like he's asking if ISO doesn't define this'n'that(integer
sizes etc.) Who does for a given arch/compiler ?

Prexactly. :)

I seem to recall conversations in years past of old machines that
had strange integer sizes (9 bits?) which C would support. Am I
delusional?
The standard demands CHAR_BIT>=8 which gives you no problem with
9-bit-bytes. Furthermore, the effective range of signed int/unsigned int
according to the standard is that of a 16 bit number (but for INT_MIN),
so sizeof(int)*CHAR_BIT==18 (or 32, nowadays) gives you no problem,
either. The same for 36-bit-longs.
IMO, H&S have that right. Or did I misunderstand your question, too?

BTW: The good old machines with 6 bits to a byte probably do not
have any C implementations to speak of (... I wait to be contradicted).
You are delusional if you think that the assiduous study of H&S won't

reveal a rich, flexible language that uses the ANSI/ISO standard as a bulwark.

MPJ
Your point was... ? Do you want to encourage/contradict/... the OP?


The sentence stands by itself. The OP had internal contradiction. Motives
are OT. MPJ
Nov 14 '05 #10


"Goran Larsson" <ho*@invalid.invalid> wrote in message
news:I8********@approve.se...
In article <xx*************@usrts005.corpusers.net>,
Randy Yates <ra*********@sonyericsson.com> wrote:
I seem to recall conversations in years past of old machines that
had strange integer sizes (9 bits?) which C would support. Am I
delusional?


Nine bits are not enough for the required ranges of integer types
such as short, int and long. It is, however, a perfectly valid
size for a byte, i.e. a char, unsigned char or signed char, in C.
A nine bit byte is only a strange size for people that view the
world through a PC.


What about a 62-bit byte? MPJ
Nov 14 '05 #11

"Merrill & Michele" <be********@comcast.net> writes:
[...]
The OP had internal contradiction.
What was the contradiction?
Motives are OT.


You must be kidding, right? I'm asking a very fundamental question
of how C is defined. Isn't that smack in the center of the topicality
for this group?
--
Randy Yates
Sony Ericsson Mobile Communications
Research Triangle Park, NC, USA
ra*********@sonyericsson.com, 919-472-1124
Nov 14 '05 #12

In article <I8********@approve.se>, Goran Larsson <ho*@invalid.invalid> wrote:
A nine bit byte is only a strange size for people that view the
world through a PC.


If you excise PCs from history, you will still find a trend towards
8-bit bytes. There will probably never be a new architecture with
9-bit bytes.

-- Richard
Nov 14 '05 #13

In <xx*************@usrts005.corpusers.net> Randy Yates <ra*********@sonyericsson.com> writes:
"Nils O. Selåsdal" <NO*@Utel.no> writes:
Looks to me like he's asking if ISO doesn't define this'n'that(integer
sizes etc.) Who does for a given arch/compiler ?
Prexactly. :)


The ISO standard delegates certain decisions to the implementor. Some
of them MUST be documented by the implementor, others need not be.

Whenever the ISO standard says: "this or that is implementation-defined",
the implementor must make a choice and document it. Otherwise, the
implementor is not required to document his choice, e.g. he is under
no obligation to reveal the exact definition of size_t.

Quite often, the standard specifies only limits, leaving the actual
choice to the implementor. For example, the character types must have
*at least* 8 bits, but the actual value is up to the implementor.
I seem to recall conversations in years past of old machines that
had strange integer sizes (9 bits?) which C would support. Am I
delusional?


K&R1 mentions such an implementation. The only standard C types
suitable for 9-bit integers are the character types.

Dan
--
Dan Pop
DESY Zeuthen, RZ group
Email: Da*****@ifh.de
Currently looking for a job in the European Union
Nov 14 '05 #14

In article <xx*************@usrts005.corpusers.net>,
Randy Yates <ra*********@sonyericsson.com> wrote:
The OP had internal contradiction.
What was the contradiction?


I don't know about a contradiction, but your question was rather
strange because you asked "If the C language is not defined by ISO C,
then what defines it?" without citing anything that said that
the C language was not defined by ISO C.

You quoted these paragraphs:

The C language does not specify the range of integers that the
integral types will represent, except to say that type int may not
be smaller than short and long may not be smaller than int.

and

Many implementations represent characters in 8 bits, type short in
16 bits, and type long in 32 bits, with type int using either 16 or
32 bits depending on the implementation. ISO C requires
implementations to use at least these widths.

I suppose one could take them to mean "the C language does not require
types to be of those minimum sizes, but ISO C does", but I don't think
that is the intended meaning. If it *is* the intended meaning, then
it is presumably referring to pre-standard C.

-- Richard
Nov 14 '05 #15

In <I8********@approve.se> ho*@invalid.invalid (Goran Larsson) writes:
In article <xx*************@usrts005.corpusers.net>,
Randy Yates <ra*********@sonyericsson.com> wrote:
I seem to recall conversations in years past of old machines that
had strange integer sizes (9 bits?) which C would support. Am I
delusional?
Nine bits are not enough for the required ranges of integer types
such as short, int and long.


Last time I checked, signed char was an integer type. Ditto about
unsigned char:

4 There are five standard signed integer types, designated as
signed char, short int, int, long int, and long long int.

6 For each of the signed integer types, there is a corresponding
(but different) unsigned integer type (designated with the keyword
unsigned) that uses the same amount of storage (including sign
information) and has the same alignment requirements.
It is, however, a perfectly valid
size for a byte, i.e. a char, unsigned char or signed char, in C.
Only unsigned char qualifies as "byte". The other character types may
ignore certain bits or combinations of bits in a byte.
A nine bit byte is only a strange size for people that view the
world through a PC.


Or through pretty much anything else in current use today: Unix
workstations, supercomputers, SCSI and IDE disks, TCP/IP networks and
the underlying networking hardware, USB devices. Ditto for most of
the open source software in use today. Our current hosted computing
world revolves around 8-bit bytes and there is no indication that
this is going to change anytime in the future.

Dan
--
Dan Pop
DESY Zeuthen, RZ group
Email: Da*****@ifh.de
Currently looking for a job in the European Union
Nov 14 '05 #16

"Randy Yates" <ra*********@sonyericsson.com> wrote in message
news:xx*************@usrts005.corpusers.net...
ho*@invalid.invalid (Goran Larsson) writes:
Nine bits are not enough for the required ranges of integer types
such as short, int and long.


What ranges are those? Where are they specified? This statement
contradicts H&S: "The C language does not specify the range of integers
that the integral types will represent...".


No, it does not contradict H&S: the (exact) ranges are not specified, but
"minimum" (i.e. smallest absolute value) ranges are. These minimum ranges are:

Type                  Minimum         Name       Maximum        Name
signed char           <= -127         SCHAR_MIN  >= 127         SCHAR_MAX
unsigned char         0                          >= 255         UCHAR_MAX
(signed) short (int)  <= -32767       SHRT_MIN   >= 32767       SHRT_MAX
unsigned short (int)  0                          >= 65535       USHRT_MAX
(signed) int          <= -32767       INT_MIN    >= 32767       INT_MAX
unsigned (int)        0                          >= 65535       UINT_MAX
(signed) long (int)   <= -2147483647  LONG_MIN   >= 2147483647  LONG_MAX
unsigned long (int)   0                          >= 4294967295  ULONG_MAX

The Name columns refer to constants defined in <limits.h> containing the
actual value(s) for the corresponding type in a given implementation.

Type 'char' is either the same as 'signed char' or the same as 'unsigned
char'; CHAR_MIN (equal to 0 or SCHAR_MIN) and CHAR_MAX (equal to SCHAR_MAX
or UCHAR_MAX) describe its range.

The above implies that type char must be at least 8 bits, types short and
int at least 16 bits, and type long at least 32 bits. Hence the statement
ending the second H&S quote you gave: "ISO C requires implementations to use
at least these widths."

C99 extended the above list by introducing '(signed) long long (int)' and
'unsigned long long (int)' types and <limits.h> constants: LLONG_MIN
<= -(2^63 - 1), LLONG_MAX >= 2^63 - 1, and ULLONG_MAX >= 2^64 - 1.

Alex
Nov 14 '05 #17

ri*****@cogsci.ed.ac.uk (Richard Tobin) writes:
In article <xx*************@usrts005.corpusers.net>,
Randy Yates <ra*********@sonyericsson.com> wrote:
The OP had internal contradiction.
What was the contradiction?


I don't know about a contradiction, but your question was rather
strange because you asked "If the C language is not defined by ISO C,
then what defines it?" without citing anything that said that
the C language was not defined by ISO C.


I thought I had, by the paragraphs I cited and you repeated below:
You quoted these paragraphs:

The C language does not specify the range of integers that the
integral types will represent, except to say that type int may not
be smaller than short and long may not be smaller than int.

and

Many implementations represent characters in 8 bits, type short in
16 bits, and type long in 32 bits, with type int using either 16 or
32 bits depending on the implementation. ISO C requires
implementations to use at least these widths.

I suppose one could take them to mean "the C language does not require
types to be of those minimum sizes, but ISO C does",
Yes, that is how I interpreted the statements.
but I don't think
that is the intended meaning.
How could it be otherwise? First the authors state that the language
does not specify the range of integers, then they state that ISO C
requires a minimum width, implying a minimum range. The two statements
can't both be true of the same standard.
If it *is* the intended meaning, then
it is presumably referring to pre-standard C.


That is exactly how I was interpreting it, but what defines
"pre-standard C"???
--
Randy Yates
Sony Ericsson Mobile Communications
Research Triangle Park, NC, USA
ra*********@sonyericsson.com, 919-472-1124
Nov 14 '05 #18

In article <xx*************@usrts005.corpusers.net>,
Randy Yates <ra*********@sonyericsson.com> wrote:
I suppose one could take them to mean "the C language does not require
types to be of those minimum sizes, but ISO C does",
Yes, that is how I interpreted the statements. but I don't think
that is the intended meaning. How could it be otherwise?


I took the second to just be a completion of the first.

But supposing you're right:
If it *is* the intended meaning, then
it is presumably referring to pre-standard C.


That is exactly how I was interpreting it, but what defines
"pre-standard C"???


The consensus represented by K&R 1 and the compilers that existed
before the first ANSI / ISO standard. I don't have a K&R 1 to hand to
check what they said.

-- Richard
Nov 14 '05 #19

In <31*************@individual.net> Michael Mair <Mi**********@invalid.invalid> writes:
BTW: The good old machines with 6 bits to a byte probably do not
have any C implementations to speak of (... I wait to be contradicted).


On those machines, 6 bits was one of the available options for the size
of a byte, rather than the hardwired size of a byte. They were word
addressed machines with word sizes of 12, 18 or 36 bits. At least one
such machine, the PDP-10, had K&R C implemented on it, but I guess that
it used a larger byte size.

Dan
--
Dan Pop
DESY Zeuthen, RZ group
Email: Da*****@ifh.de
Currently looking for a job in the European Union
Nov 14 '05 #20

On 02 Dec 2004 09:55:31 -0500, Randy Yates
<ra*********@sonyericsson.com> wrote:
ho*@invalid.invalid (Goran Larsson) writes:
In article <xx*************@usrts005.corpusers.net>,
Randy Yates <ra*********@sonyericsson.com> wrote:
> I seem to recall conversations in years past of old machines that
> had strange integer sizes (9 bits?) which C would support. Am I
> delusional?


Nine bits are not enough for the required ranges of integer types
such as short, int and long.


What ranges are those? Where are they specified? This statement contradicts
H&S: "The C language does not specify the range of integers that the
integral types will represent...".


Section 5.2.4.2 in the C99 spec. gives the minimum ranges that a
conforming implementation must support, in the form of the minimum
values for the macros in limits.h. The C89 spec. had something similar.
There are no upper bounds on the magnitudes representable by each type,
however (so for instance an unsigned char is at least 8 bits but could
be 9, 12, 32 or more). There is also a relationship such that:

bits(char) <= bits(short) <= bits(int) <= bits(long) <= bits(long long)

(bits(<t>) meaning the number of significant bits in type <t>) and the
same for unsigned types (e.g. you can't have a char that won't fit in a
short, or a long with fewer bits than an int, although they could all be
the same size, and are on some processors).

Chris C
Nov 14 '05 #21


"Merrill & Michele" <be********@comcast.net> wrote in message
news:E-********************@comcast.com...

"Goran Larsson" <ho*@invalid.invalid> wrote in message
news:I8********@approve.se...
In article <xx*************@usrts005.corpusers.net>,
Randy Yates <ra*********@sonyericsson.com> wrote:
I seem to recall conversations in years past of old machines that
had strange integer sizes (9 bits?) which C would support. Am I
delusional?


Nine bits are not enough for the required ranges of integer types
such as short, int and long. It is, however, a perfectly valid
size for a byte, i.e. a char, unsigned char or signed char, in C.
A nine bit byte is only a strange size for people that view the
world through a PC.


What about a 62-bit byte? MPJ


Perfectly valid for C.

-Mike
Nov 14 '05 #22

Randy Yates wrote:
"Nils O. Selåsdal" <NO*@Utel.no> writes:

.... snip ...

Looks to me like he's asking if ISO doesn't define this'n'that
(integer sizes etc.) Who does for a given arch/compiler ?


Prexactly. :)

I seem to recall conversations in years past of old machines that
had strange integer sizes (9 bits?) which C would support. Am I
delusional?


Which is why the existence of limits.h is prescribed, so that the
user can know the exact characteristics of the system on which he
is running.

--
Chuck F (cb********@yahoo.com) (cb********@worldnet.att.net)
Available for consulting/temporary embedded and systems.
<http://cbfalconer.home.att.net> USE worldnet address!
Nov 14 '05 #23

Michael Mair wrote:
Merrill & Michele wrote:
.... snip ...
You are delusional if you think that the assiduous study of H&S
won't reveal a rich, flexible language that uses the ANSI/ISO
standard as a bulwark. MPJ


Your point was... ? Do you want to encourage/contradict/... the OP?


I am getting rather tired of his snide remarks, which appear to be
largely for the purpose of hearing himself.

--
Chuck F (cb********@yahoo.com) (cb********@worldnet.att.net)
Available for consulting/temporary embedded and systems.
<http://cbfalconer.home.att.net> USE worldnet address!
Nov 14 '05 #24

Randy Yates wrote:
ho*@invalid.invalid (Goran Larsson) writes:
Randy Yates <ra*********@sonyericsson.com> wrote:
I seem to recall conversations in years past of old machines that
had strange integer sizes (9 bits?) which C would support. Am I
delusional?
Nine bits are not enough for the required ranges of integer types
such as short, int and long.


What ranges are those? Where are they specified? This statement
contradicts H&S: "The C language does not specify the range of
integers that the integral types will represent...".


Just #include <limits.h> in your program, and all the critical
values will be available. Some others are in <float.h>.
From N869:


5.2.4.2 Numerical limits

[#1] A conforming implementation shall document all the
limits specified in this subclause, which are specified in
the headers <limits.h> and <float.h>. Additional limits are
specified in <stdint.h>.

5.2.4.2.1 Sizes of integer types <limits.h>

[#1] The values given below shall be replaced by constant
expressions suitable for use in #if preprocessing
directives. Moreover, except for CHAR_BIT and MB_LEN_MAX,
the following shall be replaced by expressions that have the
same type as would an expression that is an object of the
corresponding type converted according to the integer
promotions. Their implementation-defined values shall be
equal or greater in magnitude (absolute value) to those
shown, with the same sign.

-- number of bits for smallest object that is not a bit-
field (byte)
CHAR_BIT 8

-- minimum value for an object of type signed char
SCHAR_MIN -127 // -(2^7 - 1)

-- maximum value for an object of type signed char
SCHAR_MAX +127 // 2^7 - 1

-- maximum value for an object of type unsigned char
UCHAR_MAX 255 // 2^8 - 1

-- minimum value for an object of type char
CHAR_MIN see below

-- maximum value for an object of type char
CHAR_MAX see below

-- maximum number of bytes in a multibyte character, for
any supported locale
MB_LEN_MAX 1

-- minimum value for an object of type short int
SHRT_MIN -32767 // -(2^15 - 1)

-- maximum value for an object of type short int
SHRT_MAX +32767 // 2^15 - 1

..... and so on ....
--
Chuck F (cb********@yahoo.com) (cb********@worldnet.att.net)
Available for consulting/temporary embedded and systems.
<http://cbfalconer.home.att.net> USE worldnet address!
Nov 14 '05 #25

In article <co**********@sunnews.cern.ch>, Dan Pop <Da*****@cern.ch> wrote:
In <I8********@approve.se> ho*@invalid.invalid (Goran Larsson) writes:

Nine bits are not enough for the required ranges of integer types
such as short, int and long.


Last time I checked, signed char was an integer type. Ditto about
unsigned char:


I wrote that nine bits were not enough for integer types such as short,
int and long. I never wrote that these three integer types were the
only integer types available. It was just a list of a few integer types
that requires more than nine bits.

--
Göran Larsson http://www.mitt-eget.com/
Nov 14 '05 #26

Randy Yates wrote:

What ranges are those? Where are they specified? This statement
contradicts H&S: "The C language does not specify the range of
integers that the integral types will represent...".


It doesn't specify exact ranges. It specifies a minimal range. The
standard says something like (from N869):

``plain'' int object has the natural size suggested by the
architecture of the execution environment (large enough to
contain any value in the range INT_MIN to INT_MAX as defined
in the header <limits.h>).

The actual range is up to the implementation, as long as it satisfies
the requirement above.

Brian
Nov 14 '05 #27

"Alex Fraser" <me@privacy.net> writes:
"Randy Yates" <ra*********@sonyericsson.com> wrote in message
news:xx*************@usrts005.corpusers.net...
ho*@invalid.invalid (Goran Larsson) writes:
Nine bits are not enough for the required ranges of integer types
such as short, int and long.


What ranges are those? Where are they specified? This statement
contradicts H&S: "The C language does not specify the range of integers
that the integral types will represent...".


No, it does not contradict H&S: the (exact) ranges are not specified, but
"minimum" (ie smallest absolute value) ranges are.


We're getting into semantic issues. I take the statement "the ranges
are not specified" to mean that they are free to assume any value. They are
not.
--
Randy Yates
Sony Ericsson Mobile Communications
Research Triangle Park, NC, USA
ra*********@sonyericsson.com, 919-472-1124
Nov 14 '05 #28

Randy Yates <ra*********@sonyericsson.com> scribbled the following:
"Alex Fraser" <me@privacy.net> writes:
"Randy Yates" <ra*********@sonyericsson.com> wrote in message
news:xx*************@usrts005.corpusers.net...
> ho*@invalid.invalid (Goran Larsson) writes:
> > Nine bits are not enough for the required ranges of integer types
> > such as short, int and long.
>
> What ranges are those? Where are they specified? This statement
> contradicts H&S: "The C language does not specify the range of integers
> that the integral types will represent...".
No, it does not contradict H&S: the (exact) ranges are not specified, but
"minimum" (ie smallest absolute value) ranges are.

We're getting into semantic issues. I take the statement "the ranges
are not specified" to mean that they are free to assume any value. They are
not.


I don't see what is the problem. The ISO C standard specifies the
minimum requirements for the ranges, i.e. the largest possible lower
bound and the smallest possible upper bound. The implementation is then
free to specify any specific range it wants to, as long as it keeps
within these requirements. Isn't this what everyone has been saying all
along?

--
/-- Joona Palaste (pa*****@cc.helsinki.fi) ------------- Finland --------\
\-------------------------------------------------------- rules! --------/
"Insanity is to be shared."
- Tailgunner
Nov 14 '05 #29

Joona I Palaste <pa*****@cc.helsinki.fi> writes:
Randy Yates <ra*********@sonyericsson.com> scribbled the following:
"Alex Fraser" <me@privacy.net> writes:
"Randy Yates" <ra*********@sonyericsson.com> wrote in message
news:xx*************@usrts005.corpusers.net...
> ho*@invalid.invalid (Goran Larsson) writes:
> > Nine bits are not enough for the required ranges of integer types
> > such as short, int and long.
>
> What ranges are those? Where are they specified? This statement
> contradicts H&S: "The C language does not specify the range of integers
> that the integral types will represent...".

No, it does not contradict H&S: the (exact) ranges are not specified, but
"minimum" (ie smallest absolute value) ranges are.

We're getting into semantic issues. I take the statement "the ranges
are not specified" to mean that they are free to assume any value. They are
not.


I don't see what is the problem. The ISO C standard specifies the
minimum requirements for the ranges, i.e. the largest possible lower
bound and the smallest possible upper bound. The implementation is then
free to specify any specific range it wants to, as long as it keeps
within these requirements. Isn't this what everyone has been saying all
along?


No. Once again, H&S say

The C language does not specify the range of integers that the
integral types will represent...".

That statement is not correct. The correct statement would be

The C language partially specifies the range of integers that the
integral types will represent by establishing minimums for those
ranges. The actual ranges are implementation-defined subject to
these minimum range constraints.

--
Randy Yates
Sony Ericsson Mobile Communications
Research Triangle Park, NC, USA
ra*********@sonyericsson.com, 919-472-1124
Nov 14 '05 #30

"Mike Wahler" wrote:
"Merrill & Michele" wrote:
"Goran Larsson" wrote:
Randy Yates wrote:

> I seem to recall conversations in years past of old machines that
> had strange integer sizes (9 bits?) which C would support. Am I
> delusional?

Nine bits are not enough for the required ranges of integer types
such as short, int and long. It is, however, a perfectly valid
size for a byte, i.e. a char, unsigned char or signed char, in C.
A nine bit byte is only a strange size for people that view the
world through a PC.


What about a 62-bit byte? MPJ


Perfectly valid for C.


No argument there. I just think there needs to be something said advocating
the width of a byte in bits as the order of a Boolean algebra. MPJ
Nov 14 '05 #31

Randy Yates <ra*********@sonyericsson.com> writes:
Joona I Palaste <pa*****@cc.helsinki.fi> writes:
Randy Yates <ra*********@sonyericsson.com> scribbled the following:
We're getting into semantic issues. I take the statement "the ranges
are not specified" to mean that they are free to assume any value.
This is where you went wrong. See below.

They are not.


I don't see what the problem is. The ISO C standard specifies the
minimum requirements for the ranges, i.e. the largest possible lower
bound and the smallest possible upper bound. The implementation is then
free to specify any specific range it wants to, as long as it keeps
within these requirements. Isn't this what everyone has been saying all
along?


No. Once again, H&S say

The C language does not specify the range of integers that the
integral types will represent...".

That statement is not correct. [...snip...]


The online Merriam-Webster dictionary gives this definition for
specify:

to name or state explicitly or in detail

Clearly the standard doesn't do that. The C language *constrains* the
ranges of integers that integral types may represent, but it does not
*specify* the range of integers that integral types will represent.
Nov 14 '05 #32

Randy Yates <ra*********@sonyericsson.com> wrote:
ri*****@cogsci.ed.ac.uk (Richard Tobin) writes:
In article <xx*************@usrts005.corpusers.net>,
Randy Yates <ra*********@sonyericsson.com> wrote:
>> The OP had internal contradiction.

>What was the contradiction?


I don't know about a contradiction, but your question was rather
strange because you asked "If the C language is not defined by ISO C,
then what defines it?" without citing anything that said that
the C language was not defined by ISO C.


I thought I had, by the paragraphs I cited and you repeated below:
You quoted these paragraphs:

The C language does not specify the range of integers that the
integral types will represent, except to say that type int may not
be smaller than short and long may not be smaller than int.

and

Many implementations represent characters in 8 bits, type short in
16 bits, and type long in 32 bits, with type int using either 16 or
32 bits depending on the implementation. ISO C requires
implementations to use at least these widths.

I suppose one could take them to mean "the C language does not require
types to be of those minimum sizes, but ISO C does",


Yes, that is how I interpreted the statements.
but I don't think
that is the intended meaning.


How could it be otherwise? First the authors state that the language
does not specify the range of integers, then they state that ISO C
requires a minimum width, implying a minimum range. The two statements
can't be both true for the same standard.


Yes, they can. ISO C does not specify *the* range of integers, but it
does specify *a* *minimum* range of integers. Not quite the same thing.
--
<Insert your favourite quote here.>
Erik Trulsson
er******@student.uu.se
Nov 14 '05 #33

Erik Trulsson <er******@student.uu.se> writes:
Randy Yates <ra*********@sonyericsson.com> wrote:
ri*****@cogsci.ed.ac.uk (Richard Tobin) writes:
In article <xx*************@usrts005.corpusers.net>,
Randy Yates <ra*********@sonyericsson.com> wrote:

>> The OP had internal contradiction.

>What was the contradiction?

I don't know about a contradiction, but your question was rather
strange because you asked "If the C language is not defined by ISO C,
then what defines it?" without citing anything that said that
the C language was not defined by ISO C.


I thought I had, by the paragraphs I cited and you repeated below:
You quoted these paragraphs:

The C language does not specify the range of integers that the
integral types will represent, except to say that type int may not
be smaller than short and long may not be smaller than int.

and

Many implementations represent characters in 8 bits, type short in
16 bits, and type long in 32 bits, with type int using either 16 or
32 bits depending on the implementation. ISO C requires
implementations to use at least these widths.

I suppose one could take them to mean "the C language does not require
types to be of those minimum sizes, but ISO C does",


Yes, that is how I interpreted the statements.
but I don't think
that is the intended meaning.


How could it be otherwise? First the authors state that the language
does not specify the range of integers, then they state that ISO C
requires a minimum width, implying a minimum range. The two statements
can't be both true for the same standard.


Yes, they can. ISO C does not specify *the* range of integers, but it
does specify *a* *minimum* range of integers. Not quite the same thing.


An architect is designing a house for you. She asks you, "How large shall
I make the master bedroom?" You reply, "I leave it unspecified." So she
makes it 10x15. Then you say, "That's too small."

Better to follow the advice of Quintilian:

One should not aim at being possible to understand, but at being
impossible to misunderstand.

--RY

--
% Randy Yates % "How's life on earth?
%% Fuquay-Varina, NC % ... What is it worth?"
%%% 919-577-9882 % 'Mission (A World Record)',
%%%% <ya***@ieee.org> % *A New World Record*, ELO
http://home.earthlink.net/~yatescr
Nov 14 '05 #34

In article <41******@falcon.midgard.homeip.net>,
Erik Trulsson <er******@student.uu.se> wrote:
The C language does not specify the range of integers that the
integral types will represent, except to say that type int may not
be smaller than short and long may not be smaller than int.

Many implementations represent characters in 8 bits, type short in
16 bits, and type long in 32 bits, with type int using either 16 or
32 bits depending on the implementation. ISO C requires
implementations to use at least these widths.
First the authors state that the language
does not specify the range of integers, then they state that ISO C
requires a minimum width, implying a minimum range. The two statements
can't be both true for the same standard.
Yes, they can. ISO C does not specify *the* range of integers, but it
does specify *a* *minimum* range of integers. Not quite the same thing.


The first paragraph says that the *only* thing the C language
specifies about the range of integers is that short <= int <= long.
The second says that ISO C specifies short >= 16, int >= 16, long >= 32.

Taken literally and separately, the two paragraphs are inconsistent
if "the C language" == "ISO C".

I don't have the book so I don't know whether they're really meant to
be taken separately like that. But it's now clear why the OP saw a
contradiction.

-- Richard
Nov 14 '05 #35

Randy Yates <ra*********@sonyericsson.com> writes:
In Harbison and Steele's text (fourth edition, p.111)
it is stated,

The C language does not specify the range of integers that the
integral types will represent, except to say that type int may not
be smaller than short and long may not be smaller than int.

They go on to say,

Many implementations represent characters in 8 bits, type short in
16 bits, and type long in 32 bits, with type int using either 16 or
32 bits depending on the implementation. ISO C requires
implementations to use at least these widths.

If the C language is not defined by ISO C, then what defines it?


The C language is defined by the ISO standard. It is not defined by
H&S, though it's a very good reference. In this case, I think that
paragraph in the 4th edition was poorly worded. There's no power
struggle between H&S and ISO, just a mistake.

The corresponding paragraph in the 5th edition says:

Standard C specifies the minimum precision for most integer types.
Type char must be at least 8 bits wide, type short at least 16
bits wide, type long at least 32 bits wide, and type long long at
least 64 bits wide. (That is, C99 requires 64-bit integer types
and the full set of 64-bit arithmetic operations.) The actual
ranges of the integer types are recorded in limits.h.

The error in H&S4 may date back to K&R1 (1978, pre-ISO), which doesn't
require specific minimum ranges. All the examples shown in K&R1
satisfy the ISO minima, but a system with 6-bit char, 12-bit short and
int, and 24-bit long would have been legal.

The intent is that short and long should provide different lengths
of integers where practical; int will normally reflect the most
"natural" size for a particular machine. As you can see, each
compiler is free to interpret short and long as appropriate for
its own hardware. About all you should count on is that short is
no longer than long.

-- K&R1 2.2, page 34

C89/C90 imposed the stricter requirements we enjoy today.

--
Keith Thompson (The_Other_Keith) ks***@mib.org <http://www.ghoti.net/~kst>
San Diego Supercomputer Center <*> <http://users.sdsc.edu/~kst>
We must do something. This is something. Therefore, we must do this.
Nov 14 '05 #36

On 02 Dec 2004 14:50:40 -0500, Randy Yates
<ra*********@sonyericsson.com> wrote in comp.lang.c:
Joona I Palaste <pa*****@cc.helsinki.fi> writes:
Randy Yates <ra*********@sonyericsson.com> scribbled the following:
"Alex Fraser" <me@privacy.net> writes:
> "Randy Yates" <ra*********@sonyericsson.com> wrote in message
> news:xx*************@usrts005.corpusers.net...
> > ho*@invalid.invalid (Goran Larsson) writes:
> > > Nine bits are not enough for the required ranges of integer types
> > > such as short, int and long.
> >
> > What ranges are those? Where are they specified? This statement
> > contradicts H&S: "The C language does not specify the range of integers
> > that the integral types will represent...".
>
> No, it does not contradict H&S: the (exact) ranges are not specified, but
> "minimum" (ie smallest absolute value) ranges are.

We're getting into semantic issues. I take the statement "the ranges
are not specified" to mean that they are free to assume any value. They are
not.


I don't see what the problem is. The ISO C standard specifies the
minimum requirements for the ranges, i.e. the largest possible lower
bound and the smallest possible upper bound. The implementation is then
free to specify any specific range it wants to, as long as it keeps
within these requirements. Isn't this what everyone has been saying all
along?


No. Once again, H&S say

The C language does not specify the range of integers that the
integral types will represent...".

That statement is not correct. The correct statement would be

The C language partially specifies the range of integers that the
integral types will represent by establishing minimums for those
ranges. The actual ranges are implementation-defined subject to
these minimum range constraints.


Several people have explained to you what the C language does specify.
Apparently you think that the wording of the first phrase you quoted
from H&S is incorrect. That is not a problem for either the C
language or the C standard.

If you don't like their wording, why don't you take it up with them
and stop quibbling over a non-language issue here?

--
Jack Klein
Home: http://JK-Technology.Com
FAQs for
comp.lang.c http://www.eskimo.com/~scs/C-faq/top.html
comp.lang.c++ http://www.parashift.com/c++-faq-lite/
alt.comp.lang.learn.c-c++
http://www.contrib.andrew.cmu.edu/~a...FAQ-acllc.html
Nov 14 '05 #37

Erik Trulsson <er******@student.uu.se> writes:
[...]
Yes, they can. ISO C does not specify *the* range of integers, but it
does specify *a* *minimum* range of integers. Not quite the same thing.


Sure, but look at the statement in H&S 4ed:

The C language does not specify the range of integers that the
integral types will represent, except to say that type int may not
be smaller than short and long may not be smaller than int.

This clearly (and incorrectly) states that the short<=int and
int<=long restrictions are the only thing the language specifies about
the ranges of the integral types. In fact, the standard specifies
these restrictions *plus* certain minimum ranges. The rest of the
paragraph mentions that ISO C specifies minima, contradicting the
initial statement.

It's a mistake in the 4th edition, corrected in the 5th edition.
It's not a huge deal, and it's not worth going through contortions to
demonstrate that their statement is consistent with the standard.

--
Keith Thompson (The_Other_Keith) ks***@mib.org <http://www.ghoti.net/~kst>
San Diego Supercomputer Center <*> <http://users.sdsc.edu/~kst>
We must do something. This is something. Therefore, we must do this.
Nov 14 '05 #38

Keith Thompson <ks***@mib.org> writes:
Randy Yates <ra*********@sonyericsson.com> writes:
In Harbison and Steele's text (fourth edition, p.111)
it is stated,

The C language does not specify the range of integers that the
integral types will represent, except to say that type int may not
be smaller than short and long may not be smaller than int.

They go on to say,

Many implementations represent characters in 8 bits, type short in
16 bits, and type long in 32 bits, with type int using either 16 or
32 bits depending on the implementation. ISO C requires
implementations to use at least these widths.

If the C language is not defined by ISO C, then what defines it?


The C language is defined by the ISO standard. It is not defined by
H&S, though it's a very good reference. In this case, I think that
paragraph in the 4th edition was poorly worded. There's no power
struggle between H&S and ISO, just a mistake.

The corresponding paragraph in the 5th edition says:

Standard C specifies the minimum precision for most integer types.
Type char must be at least 8 bits wide, type short at least 16
bits wide, type long at least 32 bits wide, and type long long at
least 64 bits wide. (That is, C99 requires 64-bit integer types
and the full set of 64-bit arithmetic operations.) The actual
ranges of the integer types are recorded in limits.h.

The error in H&S4 may date back to K&R1 (1978, pre-ISO), which doesn't
require specific minimum ranges. All the examples shown in K&R1
satisfy the ISO minima, but a system with 6-bit char, 12-bit short and
int, and 24-bit long would have been legal.

The intent is that short and long should provide different lengths
of integers where practical; int will normally reflect the most
"natural" size for a particular machine. As you can see, each
compiler is free to interpret short and long as appropriate for
its own hardware. About all you should count on is that short is
no longer than long.

-- K&R1 2.2, page 34

C89/C90 imposed the stricter requirements we enjoy today.


Thanks for this excellent historical summary and answer to my
question, Keith.
--
% Randy Yates % "Rollin' and riding and slippin' and
%% Fuquay-Varina, NC % sliding, it's magic."
%%% 919-577-9882 %
%%%% <ya***@ieee.org> % 'Living' Thing', *A New World Record*, ELO
http://home.earthlink.net/~yatescr
Nov 14 '05 #39

Jack Klein <ja*******@spamcop.net> writes:
[...]
Several people have explained to you what the C language does specify.
Apparently you think that the wording of the first phrase you quoted
from H&S is incorrect. That is not a problem for either the C
language or the C standard.

If you don't like their wording, why don't you take it up with them
and stop quibbling over a non-language issue here?


Jack,

This is not only a question of H&S's wording, nor of what the ISO
standard says, but also of what actually defines the language, now and
in the past. This has not at all been clearly explained to my
satisfaction by anyone until Keith's recent post.

In my opinion, such matters are far from a quibble and clearly on-topic.
--
% Randy Yates % "My Shangri-la has gone away, fading like
%% Fuquay-Varina, NC % the Beatles on 'Hey Jude'"
%%% 919-577-9882 %
%%%% <ya***@ieee.org> % 'Shangri-La', *A New World Record*, ELO
http://home.earthlink.net/~yatescr
Nov 14 '05 #40

On Thu, 02 Dec 2004 23:52:03 GMT, Randy Yates <ya***@ieee.org> wrote:
An architect is designing a house for you. She asks you, "How large shall
I make the master bedroom?" You reply, "I leave it unspecified." So she
makes it 10x15. Then you say, "That's too small."

Local regulations may give guidelines for architects to design houses
without specifying the exact dimensions of each. But for safety or
other considerations they may specify that master bedrooms be not less than
10x15 feet.

For bed makers, carpet makers and so on that's more useful - and
practical - than bedrooms that could be 1x1 feet upwards.

Similarly for small, medium and large bedrooms, without giving actual
sizes, regs can specify small<=medium<=large.

Bart C
Nov 14 '05 #41

In <y8**********@ieee.org> Randy Yates <ya***@ieee.org> writes:
Erik Trulsson <er******@student.uu.se> writes:
Yes, they can. ISO C does not specify *the* range of integers, but it
does specify *a* *minimum* range of integers. Not quite the same thing.
An architect is designing a house for you. She asks you, "How large shall
I make the master bedroom?" You reply, "I leave it unspecified." So she
makes it 10x15. Then you say, "That's too small."


OTOH, if you reply: "it shouldn't be smaller than 12x16", do you call this
a "specification" or a "constraint"? The C standard does not specify the
ranges of the integer types, it only imposes certain constraints on them.
Better to follow the advice of Quintilian:

One should not aim at being possible to understand, but at being
impossible to misunderstand.


With certain people (e.g. Mark McIntyre) this is impossible to achieve.

Dan
--
Dan Pop
DESY Zeuthen, RZ group
Email: Da*****@ifh.de
Currently looking for a job in the European Union
Nov 14 '05 #42

On Thu, 2 Dec 2004 09:49:01 -0600, "Merrill & Michele"
<be********@comcast.net> wrote:
What about a 62-bit byte? MPJ


Unlikely. I started programming on a 36-bit word addressed machine. To
handle text efficiently meant packing 5x7-bit chars to a word and was
a nightmare. I think one old machine (CDC 6600?) used 60-bit words but
in those days text processing was second to numeric work.

An 8-bit byte was a perfect, symmetric size to handle 7-bit ASCII
text, for scaling to 16 and 32 bits, or packing BCD. I think a 'byte'
should mean 8-bits whatever happens in future.

Future machines I think will either stay 8-bit addressable or change
to 16-bits addressable for character handling (ASCII or Unicode)
(physical memory is already 64-bits addressable).

For a low-level language like C it should be possible to know when
programming how many bits you are dealing with.

I think the designers of the standard had in mind being able to expand
ints from 32 to 64 bits or chars from 8 to 16 bits when they left the
word-lengths undefined, nothing weird, and perhaps to keep C usable on
small microprocessors with odd word sizes.

Bart C.

Nov 14 '05 #43

P: n/a

"Bart" <48**@freeuk.com> wrote in message
news:71********************************@4ax.com...
On Thu, 2 Dec 2004 09:49:01 -0600, "Merrill & Michele"
<be********@comcast.net> wrote:

An 8-bit byte was a perfect, symmetric size to handle 7-bit ASCII
text, for scaling to 16 and 32 bits, or packing BCD. I think a 'byte'
should mean 8-bits whatever happens in future.

If you explicitly mean 8-bits then use "octet."

Future machines I think will either stay 8-bit addressable or change
to 16-bits addressable for character handling (ASCII or Unicode)
(physical memory is already 64-bits addressable).

You are implying that all current machines use 8-bit bytes. There are more
computers out there than just PCs. Ever had to program a DSP?

DrX
Nov 14 '05 #44

P: n/a
In article <co*********@cui1.lmms.lmco.com>,
Xenos <do**********@spamhate.com> wrote:
You are implying that all current machines use 8-bit bytes. There are more
computers out there than just PCs. Ever had to program a DSP?


This is true, but you exaggerate: you don't have to restrict yourself
to PCs to only encounter 8-bit bytes.

Sometimes I think it would be useful to document a "profile" of C that
applies to reasonable modern general-purpose computers: flat address
space, no trap representations, 2s complement arithmetic, power-of-2
byte and short/int/long sizes, ascii-superset and so on. This would
provide an agreed target for the wide range of C programs which are
strictly speaking non-conforming but in practice portable.

-- Richard
Nov 14 '05 #45

P: n/a

"Richard Tobin" <ri*****@cogsci.ed.ac.uk> wrote in message
news:co***********@pc-news.cogsci.ed.ac.uk...
In article <co*********@cui1.lmms.lmco.com>,
Xenos <do**********@spamhate.com> wrote:
You are implying that all current machines use 8-bit bytes. There are more
computers out there than just PCs. Ever had to program a DSP?


This is true, but you exaggerate: you don't have to restrict yourself
to PCs to only encounter 8-bit bytes.

I didn't intend to exaggerate, nor imply that there weren't 8-bit processors
outside of the PC realm. My only point was that many programmers seem
(in my opinion) to have a PC-centric view, and don't realize that there are
a lot of systems that do not conform to that architecture. I don't see how
you can infer from my previous statement that I thought 8-bit processors
were exclusive to the PC.

DrX
Nov 14 '05 #46

P: n/a
In <co***********@pc-news.cogsci.ed.ac.uk> ri*****@cogsci.ed.ac.uk (Richard Tobin) writes:
Sometimes I think it would be useful to document a "profile" of C that
applies to reasonable modern general-purpose computers: flat address
space, no trap representations, 2s complement arithmetic, power-of-2
byte and short/int/long sizes, ascii-superset and so on. This would
provide an agreed target for the wide range of C programs which are
strictly speaking non-conforming but in practice portable.


This is what happens de facto in the open source computing world.
You won't find many software packages that work on machines with
non-octet bytes, 16-bit int's or non-linear address spaces. So, it is
not realistic to expect any vendor to come up with a general-purpose
computer breaking any of the quasi-universal assumptions. It has
always been hard to sell computers without a serious software base,
today it's harder than ever.

Dan
--
Dan Pop
DESY Zeuthen, RZ group
Email: Da*****@ifh.de
Currently looking for a job in the European Union
Nov 14 '05 #47

P: n/a
In <co*********@cui1.lmms.lmco.com> "Xenos" <do**********@spamhate.com> writes:

"Richard Tobin" <ri*****@cogsci.ed.ac.uk> wrote in message
news:co***********@pc-news.cogsci.ed.ac.uk...
In article <co*********@cui1.lmms.lmco.com>,
Xenos <do**********@spamhate.com> wrote:
>You are implying that all current machines use 8-bit bytes. There are more
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
>computers out there than just PCs. Ever had to program a DSP?
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
This is true, but you exaggerate: you don't have to restrict yourself
to PCs to only encounter 8-bit bytes.

I didn't intend to exaggerate, nor imply that there weren't 8-bit processors
outside of the PC realm. My only point was that many programmers seem to
(in my opinion) to have a pc-centric view, and don't realize that there are
a lot of systems that do not conform to that architecture. I don't see how
you can infer from my previous statement that I thought 8-bit processors
were exclusive to the PC.


The implication is straightforward, if you reread your first two sentences
above. You're strongly suggesting that computers that aren't PC's don't
use 8-bit bytes.

Dan
--
Dan Pop
DESY Zeuthen, RZ group
Email: Da*****@ifh.de
Currently looking for a job in the European Union
Nov 14 '05 #48

P: n/a
In <co*********@cui1.lmms.lmco.com> "Xenos" <do**********@spamhate.com> writes:

"Bart" <48**@freeuk.com> wrote in message
news:71********************************@4ax.com...
On Thu, 2 Dec 2004 09:49:01 -0600, "Merrill & Michele"
<be********@comcast.net> wrote:

An 8-bit byte was a perfect, symmetric size to handle 7-bit ASCII
text, for scaling to 16 and 32 bits, or packing BCD. I think a 'byte'
should mean 8-bits whatever happens in future.

If you explicitly mean 8-bits then use "octet."

Future machines I think will either stay 8-bit addressable or change
to 16-bits addressable for character handling (ASCII or Unicode)
(physical memory is already 64-bits addressable).

You are implying that all current machines use 8-bit bytes. There are more
computers out there than just PCs. Ever had to program a DSP?


He was merely talking about machines designed for hosted computing.

Freestanding computing on processors designed for embedded control
applications is a world of its own, that shouldn't be mixed up with
general purpose computing.

Dan
--
Dan Pop
DESY Zeuthen, RZ group
Email: Da*****@ifh.de
Currently looking for a job in the European Union
Nov 14 '05 #49

P: n/a

"Dan Pop" <Da*****@cern.ch> wrote in message
news:co**********@sunnews.cern.ch...
In <co*********@cui1.lmms.lmco.com> "Xenos" <do**********@spamhate.com> writes:
"Richard Tobin" <ri*****@cogsci.ed.ac.uk> wrote in message
news:co***********@pc-news.cogsci.ed.ac.uk...
In article <co*********@cui1.lmms.lmco.com>,
Xenos <do**********@spamhate.com> wrote:

>You are implying that all current machines use 8-bit bytes. There are more
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
>computers out there than just PCs. Ever had to program a DSP?
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

The implication is straightforward, if you reread your first two sentences
above. You're strongly suggesting that computers that aren't PC's don't
use 8-bit bytes.

I honestly don't see the inference and it definitely was not my intent, but
I will endeavor to be more clear in the future.

Thanks.
Nov 14 '05 #50
