how can i generate warnings for implicit casts that lose bits?

here is a post i put out (using Google Groups) that got dropped by
google:

i am using gcc as so:
$ gcc -v
Using built-in specs.
Target: i386-redhat-linux
Configured with: ../configure --prefix=/usr --mandir=/usr/share/man --
infodir=/usr/share/info --enable-shared --enable-threads=posix --
enable-checking=release --with-system-zlib --enable-__cxa_atexit --
disable-libunwind-exceptions --enable-libgcj-multifile --enable-
languages=c,c++,objc,obj-c++,java,fortran,ada --enable-java-awt=gtk --
disable-dssi --with-java-home=/usr/lib/jvm/java-1.4.2-gcj-1.4.2.0/jre
--with-cpu=generic --host=i386-redhat-linux
Thread model: posix
gcc version 4.1.1 20060525 (Red Hat 4.1.1-1)

and have compiled a simple test program (FILE: hello.c):

//
// $ gcc -Wconversion -o hello hello.c
// $ hello
//

#include <stdio.h>
main()
{
unsigned long a_ulong = 0; // 32 bit
short a_short_array[128]; // 16 bit each

a_ulong = 1234567;

a_short_array[26] = a_ulong;

printf("%d, %hx, %x, %lx \n", sizeof(a_short_array),
a_short_array[26], a_short_array[26], a_ulong );
//
// printf output is:
//
// 256, d687, ffffd687, 12d687
//
}

and ran it as so:

$ gcc -Wconversion -o hello hello.c
$ hello

getting output:

256, d687, ffffd687, 12d687

now, i have confirmed that a short is 16 bits and an unsigned long is
32 bits. why doesn't this line of code:
a_short_array[26] = a_ulong;
generate a warning when i have the -Wconversion or -Wall flags set on
the gcc invocation line?

there is clearly a loss of bits (or a changing of value).

here is what the manual says about it:
from http://gcc.gnu.org/onlinedocs/gcc/Wa...arning-Options:

-Wconversion
Warn for implicit conversions that may alter a value. This
includes conversions between real and integer, like abs (x) when x is
double; conversions between signed and unsigned, like unsigned ui =
-1; and conversions to smaller types, like sqrtf (M_PI). Do not warn
for explicit casts like abs ((int) x) and ui = (unsigned) -1, or if
the value is not changed by the conversion like in abs (2.0). Warnings
about conversions between signed and unsigned integers can be disabled
by using -Wno-sign-conversion.

For C++, also warn for conversions between NULL and non-pointer
types; confusing overload resolution for user-defined conversions; and
conversions that will never use a type conversion operator:
conversions to void, the same type, a base class or a reference to
them. Warnings about conversions between signed and unsigned integers
are disabled by default in C++ unless -Wsign-conversion is explicitly
enabled.
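
(a hedged illustration: under the semantics quoted above, an assignment like
the following would be expected to draw a warning. the diagnostic wording is
an assumption, and the quoted text appears to come from a newer manual than
the gcc 4.1.1 shown earlier, whose own documentation gives -Wconversion
different, prototype-related semantics:)

unsigned long ul = 1234567UL; /* needs 21 bits */
short s;                      /* 16 bits on this target */
s = ul;                       /* value changes: a conversion warning is expected here */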

is there some other compiler flag i need to hit? i don't get why this
doesn't generate a warning.
finally, please reply to both newsgroups as i don't hang around
comp.lang.c very much.

thank you,

r b-j

Jun 5 '07 #1
robert bristow-johnson <rb*@audioimagination.com> writes:
[...]
now, i have confirmed that a short is 16 bits and an unsigned long is
32 bits.
How did you do that? Try this:

printf("size of short = %d, size of ulong = %d\n", sizeof(short), sizeof(unsigned long));

My suspicion is that they are both 32 bits (4 chars) on your machine
and that's why you're not getting a warning.

Robert, I NEVER use these data types anymore since I discovered
stdint.h defined in C99. I instead use explicit types like int16_t, uint32_t,
etc.
--
% Randy Yates % "Remember the good old 1980's, when
%% Fuquay-Varina, NC % things were so uncomplicated?"
%%% 919-577-9882 % 'Ticket To The Moon'
%%%% <ya***@ieee.org % *Time*, Electric Light Orchestra
http://home.earthlink.net/~yatescr
Jun 5 '07 #2
Randy Yates <ya***@ieee.org> writes:
robert bristow-johnson <rb*@audioimagination.com> writes:
>[...]
now, i have confirmed that a short is 16 bits and an unsigned long is
32 bits.

How did you do that? Try this:

printf("size of short = %d, size of ulong = %d\n", sizeof(short), sizeof(unsigned long));
Doh! Sorry! I just reread your code and saw your statement verified there.

I have no freaking idea why the compiler doesn't burp. Not even with -Wall do I get
a warning on this conversion.
--
% Randy Yates % "She's sweet on Wagner-I think she'd die for Beethoven.
%% Fuquay-Varina, NC % She love the way Puccini lays down a tune, and
%%% 919-577-9882 % Verdi's always creepin' from her room."
%%%% <ya***@ieee.org % "Rockaria", *A New World Record*, ELO
http://home.earthlink.net/~yatescr
Jun 5 '07 #3
Randy Yates wrote:

(snip)
printf("size of short = %d, size of ulong = %d\n", sizeof(short), sizeof(unsigned long));
This makes the assumption that sizeof returns an int, when it
often returns something else. Maybe you should also test
sizeof(sizeof(int))==sizeof(int)

-- glen

Jun 5 '07 #4


glen herrmannsfeldt wrote:
Randy Yates wrote:

(snip)
>printf("size of short = %d, size of ulong = %d\n", sizeof(short),
sizeof(unsigned long));


This makes the assumption that sizeof returns an int, when it
often returns something else. Maybe you should also test
sizeof(sizeof(int))==sizeof(int)
This also makes the assumption that sizeof() returns size in bytes,
whereas sizeof returns the size in chars. Char may be bigger than one byte.

Vladimir Vassilevsky

DSP and Mixed Signal Design Consultant

http://www.abvolt.com
Jun 5 '07 #5
glen herrmannsfeldt <ga*@ugcs.caltech.edu> writes:
This makes the assumption that sizeof returns an int, when it
often returns something else.
sizeof's result is never an int, although it can be an unsigned
int.
--
Ben Pfaff
http://benpfaff.org
Jun 5 '07 #6
Vladimir Vassilevsky <an************@hotmail.com> writes:
glen herrmannsfeldt wrote:
>Randy Yates wrote:
(snip)
>>printf("size of short = %d, size of ulong = %d\n", sizeof(short),
sizeof(unsigned long));
This makes the assumption that sizeof returns an int, when it
often returns something else. Maybe you should also test
sizeof(sizeof(int))==sizeof(int)

This also makes the assumption that sizeof() returns size in bytes,
whereas sizeof returns the size in chars. Char may be bigger than one
byte.
No, a "byte" is by definition the size of a char. The term "byte" may
have other meanings outside the context of C, but sizeof(char) is 1 by
definition.

--
Keith Thompson (The_Other_Keith) ks***@mib.org <http://www.ghoti.net/~kst>
San Diego Supercomputer Center <* <http://users.sdsc.edu/~kst>
"We must do something. This is something. Therefore, we must do this."
-- Antony Jay and Jonathan Lynn, "Yes Minister"
Jun 5 '07 #7
glen herrmannsfeldt <ga*@ugcs.caltech.edu> writes:
Randy Yates wrote:
(snip)
>printf("size of short = %d, size of ulong = %d\n", sizeof(short), sizeof(unsigned long));

This makes the assumption that sizeof returns an int, when it
often returns something else. Maybe you should also test
sizeof(sizeof(int))==sizeof(int)
No, just convert it before printing it:

printf("size of short = %d, size of unsigned long = %d\n",
(int)sizeof(short), (int)sizeof(unsigned long));

Or use "lu" and unsigned long if the result of sizeof might exceed
INT_MAX.

Or use "%zu" if your implementation supports it.

--
Keith Thompson (The_Other_Keith) ks***@mib.org <http://www.ghoti.net/~kst>
San Diego Supercomputer Center <* <http://users.sdsc.edu/~kst>
"We must do something. This is something. Therefore, we must do this."
-- Antony Jay and Jonathan Lynn, "Yes Minister"
Jun 5 '07 #8
Vladimir Vassilevsky <an************@hotmail.com> writes:
This also makes the assumption that sizeof() returns size in bytes,
whereas sizeof returns the size in chars. Char may be bigger than one
byte.
This statement reflects some confusion about C definitions. In
C, a char is always one byte, in that sizeof(char) is always 1.
However, the size of a byte is implementation-defined: it may be
larger than one octet (though not smaller).
--
"It wouldn't be a new C standard if it didn't give a
new meaning to the word `static'."
--Peter Seebach on C99
Jun 5 '07 #9
On Jun 5, 11:39 pm, glen herrmannsfeldt <g...@ugcs.caltech.edu> wrote:
Randy Yates wrote:

(snip)
printf("size of short = %d, size of ulong = %d\n", sizeof(short), sizeof(unsigned long));

This makes the assumption that sizeof returns an int, when it
often returns something else.
Perhaps it does, but on any realistic platform, why would this matter
if we're measuring the size of a short and a long?

--
Oli

Jun 5 '07 #10
In article <87************@blp.benpfaff.org>,
Ben Pfaff <bl*@cs.stanford.edu> wrote:
>However, the size of a byte is implementation-defined: it may be
larger than one octet (though not smaller).
How big is an octet on ternary machines?

-- Richard
--
"Consideration shall be given to the need for as many as 32 characters
in some alphabets" - X3.4, 1963.
Jun 5 '07 #11
In article <11**********************@w5g2000hsg.googlegroups.com>,
Oli Charlesworth <ca***@olifilth.co.uk> wrote:
printf("size of short = %d, size of ulong = %d\n", sizeof(short),
sizeof(unsigned long));
>This makes the assumption that sizeof returns an int, when it
often returns something else.
>Perhaps it does, but on any realistic platform, why would this matter
if we're measuring the size of a short and a long?
Because they're passed to a varargs function (printf), which will try
to access them as ints, so if they're in fact bigger than ints -
regardless of their value - things will go horribly wrong.

I don't think there is a simple reliable way to print a size_t in
general, since it could in principle be bigger than unsigned long
long, but in this case using (int)sizeof(short) would work.

-- Richard

--
"Consideration shall be given to the need for as many as 32 characters
in some alphabets" - X3.4, 1963.
Jun 5 '07 #12
Vladimir Vassilevsky <an************@hotmail.com> writes:
glen herrmannsfeldt wrote:
>Randy Yates wrote:
(snip)
>>printf("size of short = %d, size of ulong = %d\n", sizeof(short),
sizeof(unsigned long));
This makes the assumption that sizeof returns an int, when it
often returns something else. Maybe you should also test
sizeof(sizeof(int))==sizeof(int)

This also makes the assumption that sizeof() returns size in bytes,
whereas sizeof returns the size in chars.
It is true that sizeof() returns the size in chars. It is also true
that in C, a char is, by definition, a byte.

However, even if you take "byte" to mean "8 bits," my statement was
not incorrect since Robert stated up front that the compiler was for
the i386, and that machine has 8-bit bytes.

I am aware of this "byte" definition since I ran into it on the TI TMS
C54x C compiler, where sizeof(int) = 1, even though an int is 16
bits. That machine's native datapath is 16 bits wide and incapable of
representing a type smaller than 16 bits.
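
(assumptions like "short is 16 bits" can also be checked at compile time,
even in C89; the typedef name below is purely illustrative:)

#include <limits.h>

/* compilation fails (negative array size) if short is not exactly 16 bits */
typedef char short_is_16_bits[(sizeof(short) * CHAR_BIT == 16) ? 1 : -1];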
--
% Randy Yates % "She has an IQ of 1001, she has a jumpsuit
%% Fuquay-Varina, NC % on, and she's also a telephone."
%%% 919-577-9882 %
%%%% <ya***@ieee.org % 'Yours Truly, 2095', *Time*, ELO
http://home.earthlink.net/~yatescr
Jun 5 '07 #13
ri*****@cogsci.ed.ac.uk (Richard Tobin) writes:
In article <11**********************@w5g2000hsg.googlegroups.com>,
Oli Charlesworth <ca***@olifilth.co.uk> wrote:
>printf("size of short = %d, size of ulong = %d\n", sizeof(short),
sizeof(unsigned long));
>>This makes the assumption that sizeof returns an int, when it
often returns something else.
>>Perhaps it does, but on any realistic platform, why would this matter
if we're measuring the size of a short and a long?

Because they're passed to a varargs function (printf), which will try
to access them as ints, so if they're in fact bigger than ints -
regardless of their value - things will go horribly wrong.

I don't think there is a simple reliable way to print a size_t in
general, since it could in principle be bigger than unsigned long
long, but in this case using (int)sizeof(short) would work.
That brings up an interesting question: Doesn't this behavior depend
on the machine's endianness?

For example, consider the statement

printf("sizeof(int) = %d", sizeof(int));

and the case in which int is 16 bits and sizeof() returns 32 bits.

Is it true that a little-endian machine will print this correctly,
while a big-endian machine will not?
--
% Randy Yates % "She's sweet on Wagner-I think she'd die for Beethoven.
%% Fuquay-Varina, NC % She love the way Puccini lays down a tune, and
%%% 919-577-9882 % Verdi's always creepin' from her room."
%%%% <ya***@ieee.org % "Rockaria", *A New World Record*, ELO
http://home.earthlink.net/~yatescr
Jun 5 '07 #14
In article <m3************@ieee.org>, Randy Yates <ya***@ieee.org> wrote:
>For example, consider the statement

printf("sizeof(int) = %d", sizeof(int));

and the case in which int is 16 bits and sizeof() returns 32 bits.

Is it true that a little-endian machine will print this correctly,
while a big-endian machine will not?
For a common method of implementation, yes. If you printed two values
the second would print as zero.

But varargs implementations can be more complicated. They could use
separate arrays for arguments of different types, in which case they
would look in completely the wrong place for the argument.

They could also pass some arguments in registers, in which case it
might work regardless of endianness, or always fail.

-- Richard
--
"Consideration shall be given to the need for as many as 32 characters
in some alphabets" - X3.4, 1963.
Jun 5 '07 #15
Richard Tobin said:
In article <87************@blp.benpfaff.org>,
Ben Pfaff <bl*@cs.stanford.edu> wrote:
>>However, the size of a byte is implementation-defined: it may be
larger than one octet (though not smaller).

How big is an octet on ternary machines?
The requirement is to be able to represent at least 256 discrete values.
This can be accomplished in six trits (3^5 = 243 < 256 <= 729 = 3^6), the
relevant trit pattern being 100110, which is 255 in ternary.

--
Richard Heathfield
"Usenet is a strange place" - dmr 29/7/1999
http://www.cpax.org.uk
email: rjh at the above domain, - www.
Jun 5 '07 #16
Ben Pfaff wrote:
glen herrmannsfeldt <ga*@ugcs.caltech.edu> writes:
>>This makes the assumption that sizeof returns an int, when it
often returns something else.
sizeof's result is never an int, although it can be an unsigned
int.
So I should have said "always" instead of "often"?

-- glen

Jun 5 '07 #17
Vladimir Vassilevsky wrote:

(even more snip)
>>printf("size of short = %d, size of ulong = %d\n", sizeof(short),
sizeof(unsigned long));
>This makes the assumption that sizeof returns an int, when it
often returns something else. Maybe you should also test
sizeof(sizeof(int))==sizeof(int)
This also makes the assumption that sizeof() returns size in bytes,
whereas sizeof returns the size in chars. Char may be bigger than one byte.
I would say it is the reader of the output that makes that
assumption, not the program. If you write:

printf("sizeof(short) = %d, sizeof(unsigned long) = %d\n",
(int)sizeof(short), (int)sizeof(unsigned long));

then it is more obvious that it is returning the result
of sizeof.

-- glen

Jun 5 '07 #18
In article <58******************************@bt.com>,
Richard Heathfield <rj*@see.sig.invalid> wrote:
>>>However, the size of a byte is implementation-defined: it may be
larger than one octet (though not smaller).
>How big is an octet on ternary machines?
>The requirement is to be able to represent at least 256 discrete values.
This can be accomplished in six trits, the relevant trit pattern being
100110.
What I was getting at was whether the definition of octet should be
taken to be 8 bits, or 8 whatsits (where whatsit = bit, trit, etc).
That is, can a six-trit word be considered to be smaller than an
octet, defeating the claim that a byte may not be smaller than octet?

-- Richard
--
"Consideration shall be given to the need for as many as 32 characters
in some alphabets" - X3.4, 1963.
Jun 5 '07 #19
Richard Tobin said:
In article <58******************************@bt.com>,
Richard Heathfield <rj*@see.sig.invalidwrote:
>>>>However, the size of a byte is implementation-defined: it may be
larger than one octet (though not smaller).
>>How big is an octet on ternary machines?
>>The requirement is to be able to represent at least 256 discrete
values. This can be accomplished in six trits, the relevant trit
pattern being 100110.

What I was getting at was whether the definition of octet should be
taken to be 8 bits, or 8 whatsits (where whatsit = bit, trit, etc).
Oh, I see.
That is, can a six-trit word be considered to be smaller than an
octet, defeating the claim that a byte may not be smaller than octet?
The C Standard makes no such claim. It only makes the claim that a byte
must be at least 8 bits wide. If we accept the possibility of a ternary
machine, the minimum number of trits that would do the trick is 6.

As far as I can tell, the word 'octet' doesn't appear in the Standard.

--
Richard Heathfield
"Usenet is a strange place" - dmr 29/7/1999
http://www.cpax.org.uk
email: rjh at the above domain, - www.
Jun 5 '07 #20
Randy Yates wrote:

(snip)
>>I don't think there is a simple reliable way to print a size_t in
general, since it could in principle be bigger than unsigned long
long, but in this case using (int)sizeof(short) would work.
As someone else mentioned, %zu has been added to solve the problem.
That brings up an interesting question: Doesn't this behavior depend
on the machine's endianness?
For example, consider the statement
printf("sizeof(int) = %d", sizeof(int));
and the case in which int is 16 bits and sizeof() returns 32 bits.
Is it true that a little-endian machine will print this correctly,
while a big-endian machine will not?
It is reasonably likely that it will, though it is still wrong.
Also, that will only work if you are just printing one of them.

I consider this a disadvantage of little-endian, as it tends to
hide bugs due to type mismatch until it is too late.
Consider also the mistake of printing a 16-bit value with a
format item expecting 32 bits (or passing it to any function
expecting a 32-bit value). It might work.

-- glen

Jun 5 '07 #21

grumble, grumble...

okay guys we can argue a little bit about the sizes of types, but
isn't it clear that when i run these lines of code:

a_ulong = 1234567;

a_short_array[26] = a_ulong;

printf("%d, %hx, %x, %lx \n", sizeof(a_short_array),
a_short_array[26], a_short_array[26], a_ulong );

and get this for output:

256, d687, ffffd687, 12d687

that the bits in the hex digits "12" went bye-bye in the assignment
statement? i just wanna know what flag to set (if any) that makes the
compiler tell me i might want to check the statement that could
potentially throw away those bits. i would think, from the
description that -Wconversion or -Wall should do it, but it doesn't
and i was wondering if the hardcore C or gcc geeks might know the
magic invocation to generate such a warning.

i'm no linux or gnu freak (really a neophyte), i just remember in my
old codewarrior days that there was a nice little check box i could
hit to see such warnings. killing such warnings is a useful
discipline to have to avoid some unforeseen bugs that might also be
hard to find. it's sorta like enforcing strict type checking.
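
(one way to keep that discipline in C as it stands is to make the narrowing
explicit, which documents the intent and keeps a value-change warning, where
a compiler offers one, quiet; a sketch against the earlier example:)

a_short_array[26] = (short)a_ulong; /* deliberate narrowing: keeps the low 16 bits */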

r b-j

Jun 5 '07 #22
In article <c9******************************@comcast.com>,
glen herrmannsfeldt <ga*@ugcs.caltech.edu> wrote:
>As someone else mentioned, %zu has been added to solve the problem.
I'd never noticed that before. Nor had I noticed %jd and %td.

-- Richard
--
"Consideration shall be given to the need for as many as 32 characters
in some alphabets" - X3.4, 1963.
Jun 5 '07 #23
ri*****@cogsci.ed.ac.uk (Richard Tobin) writes:
In article <87************@blp.benpfaff.org>,
Ben Pfaff <bl*@cs.stanford.edu> wrote:
>>However, the size of a byte is implementation-defined: it may be
larger than one octet (though not smaller).

How big is an octet on ternary machines?
An "octet" is by definition 8 bits. If you don't have bits, you can't
have octets. Of course a ternary machine can emulate bits; the answer
to your question then depends on how the emulation is done.

If you use ternary machines, it might be reasonable to refer to a
collection of 8 trits as an "octet". That would conflict with normal
usage, but then so do ternary machines.

--
Keith Thompson (The_Other_Keith) ks***@mib.org <http://www.ghoti.net/~kst>
San Diego Supercomputer Center <* <http://users.sdsc.edu/~kst>
"We must do something. This is something. Therefore, we must do this."
-- Antony Jay and Jonathan Lynn, "Yes Minister"
Jun 6 '07 #24
robert bristow-johnson wrote:

(snip)
that the bits in the hex digits "12" went bye-bye in the assignment
statement? i just wanna know what flag to set (if any) that makes the
compiler tell me i might want to check the statement that could
potentially throw away those bits. i would think, from the
description that -Wconversion or -Wall should do it, but it doesn't
and i was wondering if the hardcore C or gcc geeks might know the
magic invocation to generate such a warning.
As far as I know, this is part of C.

Note that Java requires a cast for all narrowing conversions.
Maybe you should switch to Java instead.

-- glen

Jun 6 '07 #25
ri*****@cogsci.ed.ac.uk (Richard Tobin) writes:
In article <11**********************@w5g2000hsg.googlegroups.com>,
Oli Charlesworth <ca***@olifilth.co.uk> wrote:
>printf("size of short = %d, size of ulong = %d\n", sizeof(short),
sizeof(unsigned long));
>>This makes the assumption that sizeof returns an int, when it
often returns something else.
>>Perhaps it does, but on any realistic platform, why would this matter
if we're measuring the size of a short and a long?

Because they're passed to a varargs function (printf), which will try
to access them as ints, so if they're in fact bigger than ints -
regardless of their value - things will go horribly wrong.
Correction: things *might* go horribly wrong. Worse, they might go
horribly right. (I say "horribly" because it could result in a
failure to detect the error until the code is ported to another
platform and fails at the most embarrassing possible moment.)

--
Keith Thompson (The_Other_Keith) ks***@mib.org <http://www.ghoti.net/~kst>
San Diego Supercomputer Center <* <http://users.sdsc.edu/~kst>
"We must do something. This is something. Therefore, we must do this."
-- Antony Jay and Jonathan Lynn, "Yes Minister"
Jun 6 '07 #26
glen herrmannsfeldt <ga*@ugcs.caltech.edu> writes:
Ben Pfaff wrote:
>glen herrmannsfeldt <ga*@ugcs.caltech.edu> writes:
>>>This makes the assumption that sizeof returns an int, when it
often returns something else.
>sizeof's result is never an int, although it can be an unsigned
int.

So I should have said "always" instead of "often"?
Yes.
--
"Am I missing something?"
--Dan Pop
Jun 6 '07 #27
Randy Yates <ya***@ieee.org> writes:
ri*****@cogsci.ed.ac.uk (Richard Tobin) writes:
>In article <11**********************@w5g2000hsg.googlegroups.com>,
Oli Charlesworth <ca***@olifilth.co.uk> wrote:
>>printf("size of short = %d, size of ulong = %d\n", sizeof(short),
sizeof(unsigned long));
>>>This makes the assumption that sizeof returns an int, when it
often returns something else.
>>>Perhaps it does, but on any realistic platform, why would this matter
if we're measuring the size of a short and a long?

Because they're passed to a varargs function (printf), which will try
to access them as ints, so if they're in fact bigger than ints -
regardless of their value - things will go horribly wrong.

I don't think there is a simple reliable way to print a size_t in
general, since it could in principle be bigger than unsigned long
long, but in this case using (int)sizeof(short) would work.

That brings up an interesting question: Doesn't this behavior depend
on the machine's endianness?

For example, consider the statement

printf("sizeof(int) = %d", sizeof(int));

and the case in which int is 16 bits and sizeof() returns 32 bits.

Is it true that a little-endian machine will print this correctly,
while a big-endian machine will not?
Maybe.

As far as standard C is concerned, it's undefined behavior. That
means that the standard says absolutely nothing about what will
happen. It might blow up, it might print correct or incorrect
results, and it might make demons fly out of your nose (not likely,
but if it does you can't complain that it violates the standard).

It's conceivable that 32-bit and 16-bit integers are passed as
arguments in different registers; attempting to read one when you were
promised the other might give you garbage, regardless of endianness.

The solution is quite simple: don't do that.

(I see that this discussion is cross-posted to comp.dsp and
comp.lang.c. That's probably not a good idea.)

--
Keith Thompson (The_Other_Keith) ks***@mib.org <http://www.ghoti.net/~kst>
San Diego Supercomputer Center <* <http://users.sdsc.edu/~kst>
"We must do something. This is something. Therefore, we must do this."
-- Antony Jay and Jonathan Lynn, "Yes Minister"
Jun 6 '07 #28
Richard Heathfield wrote:
Richard Tobin said:
(snip)
>>That is, can a six-trit word be considered to be smaller than an
octet, defeating the claim that a byte may not be smaller than octet?
The C Standard makes no such claim. It only makes the claim that a byte
must be at least 8 bits wide. If we accept the possibility of a ternary
machine, the minimum number of trits that would do the trick is 6.
C sort of expects a binary representation. Unsigned addition is
modulo some power of two, and bitwise operators would be very slow
otherwise.

Fortran specifically allows any base greater than one. That would
be a better place to look for a ternary machine. (Fortran has bitwise
operations as intrinsic functions. It isn't so obvious what they would
do on a non-binary machine.)

-- glen

Jun 6 '07 #29
robert bristow-johnson said:
isn't it clear that when i run these lines of code:

a_ulong = 1234567;

a_short_array[26] = a_ulong;

printf("%d, %hx, %x, %lx \n", sizeof(a_short_array),
a_short_array[26], a_short_array[26], a_ulong );

and get this for output:

256, d687, ffffd687, 12d687

that the bits in the hex digits "12" went bye-bye in the assignment
statement?
Yes. The C Standard does not require implementations to produce a
diagnostic message in this circumstance. A conversion is supplied. Of
necessity, if the lvalue is less wide than the rvalue, any information
stored in those extra bits will be lost. Nevertheless, the conversion
is a useful one in situations where no information is lost, and to take
advantage of it does not constitute a syntax error or constraint
violation, so no diagnostic message is required.
i just wanna know what flag to set (if any) that makes the
compiler tell me i might want to check the statement that could
potentially throw away those bits.
Check in a newsgroup that deals with your implementation.

--
Richard Heathfield
"Usenet is a strange place" - dmr 29/7/1999
http://www.cpax.org.uk
email: rjh at the above domain, - www.
Jun 6 '07 #30
robert bristow-johnson wrote:
// $ gcc -Wconversion -o hello hello.c
When using gcc, I usually use -Wall -Wextra -Werror which turns
on all warnings and makes all warnings into errors.

Not sure if this will help in your case.

Erik
--
-----------------------------------------------------------------
Erik de Castro Lopo
-----------------------------------------------------------------
"There are only two things wrong with C++: The initial concept and
the implementation." -- Bertrand Meyer
Jun 6 '07 #31
glen herrmannsfeldt <ga*@ugcs.caltech.edu> writes:
Richard Heathfield wrote:
>Richard Tobin said:
(snip)
>>>That is, can a six-trit word be considered to be smaller than an
octet, defeating the claim that a byte may not be smaller than octet?
>The C Standard makes no such claim. It only makes the claim that a byte
must be at least 8 bits wide. If we accept the possibility of a ternary
machine, the minimum number of trits that would do the trick is 6.

C sort of expects a binary representation. Unsigned addition is
modulo some power of two, and bitwise operators would be very slow
otherwise.
C99 explicitly requires a binary representation. (Emulating binary on
a ternary machine would be valid.)

--
Keith Thompson (The_Other_Keith) ks***@mib.org <http://www.ghoti.net/~kst>
San Diego Supercomputer Center <* <http://users.sdsc.edu/~kst>
"We must do something. This is something. Therefore, we must do this."
-- Antony Jay and Jonathan Lynn, "Yes Minister"
Jun 6 '07 #32
Richard Heathfield <rj*@see.sig.invalid> writes:
Richard Tobin said:
>In article <87************@blp.benpfaff.org>,
Ben Pfaff <bl*@cs.stanford.edu> wrote:
>>>However, the size of a byte is implementation-defined: it may be
larger than one octet (though not smaller).

How big is an octet on ternary machines?

The requirement is to be able to represent at least 256 discrete values.
This can be accomplished in six trits, the relevant trit pattern being
100110.
Wouldn't a "ternary digit" be a "tit"??? "How many tits are in that byte?"...
--
% Randy Yates % "And all that I can do
%% Fuquay-Varina, NC % is say I'm sorry,
%%% 919-577-9882 % that's the way it goes..."
%%%% <ya***@ieee.org % Getting To The Point', *Balance of Power*, ELO
http://home.earthlink.net/~yatescr
Jun 6 '07 #33
robert bristow-johnson <rb*@audioimagination.com> writes:
[...]
i just wanna know what flag to set (if any) that makes the
compiler tell me i might want to check the statement that could
potentially throw away those bits. i would think, from the
description that -Wconversion or -Wall should do it, but it doesn't
and i was wondering if the hardcore C or gcc geeks might know the
magic invocation to generate such a warning.
[...]

Try gnu.gcc.help.

--
Keith Thompson (The_Other_Keith) ks***@mib.org <http://www.ghoti.net/~kst>
San Diego Supercomputer Center <* <http://users.sdsc.edu/~kst>
"We must do something. This is something. Therefore, we must do this."
-- Antony Jay and Jonathan Lynn, "Yes Minister"
Jun 6 '07 #34
Keith Thompson wrote:
Vladimir Vassilevsky <an************@hotmail.com> writes:
>glen herrmannsfeldt wrote:
>>Randy Yates wrote:
(snip)

printf("size of short = %d, size of ulong = %d\n", sizeof(short),
sizeof(unsigned long));
This makes the assumption that sizeof returns an int, when it
often returns something else. Maybe you should also test
sizeof(sizeof(int))==sizeof(int)
This also makes the assumption that sizeof() returns size in bytes,
whereas sizeof returns the size in chars. Char may be bigger then one
byte.

No, a "byte" is by definition the size of a char. The term "byte" may
have other meanings outside the context of C, but sizeof(char) is 1 by
definition.
Isn't a byte in C the larger of character, octet, or smallest
addressable storage element?

Jerry
--
Engineering is the art of making what you want from things you can get.
¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯ ¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯ ¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯
Jun 6 '07 #35
Erik de Castro Lopo <er***@mega-nerd.com> writes:
robert bristow-johnson wrote:
>// $ gcc -Wconversion -o hello hello.c

When using gcc, I usually use -Wall -Wextra -Werror which turns
on all warnings and makes all warnings into errors.

Not sure if this will help in your case.
It doesn't on my machine.

[yates@localhost tmp]$ gcc -v
Using built-in specs.
Target: x86_64-unknown-linux-gnu
Configured with: ./configure
Thread model: posix
gcc version 4.1.2
--
% Randy Yates % "And all that I can do
%% Fuquay-Varina, NC % is say I'm sorry,
%%% 919-577-9882 % that's the way it goes..."
%%%% <ya***@ieee.org % Getting To The Point', *Balance of Power*, ELO
http://home.earthlink.net/~yatescr
Jun 6 '07 #36
Jerry Avins said:
Keith Thompson wrote:
<snip>
>No, a "byte" is by definition the size of a char. The term "byte"
may have other meanings outside the context of C, but sizeof(char) is
1 by definition.

Isn't a byte in C the larger of character, octet, or smallest
addressable storage element?
Well, that isn't how it's defined! But yes, your rule looks correct to
me. I think it's easier to think of it as 8+ bits wide.

--
Richard Heathfield
"Usenet is a strange place" - dmr 29/7/1999
http://www.cpax.org.uk
email: rjh at the above domain, - www.
Jun 6 '07 #37
In article <ln************@nuthaus.mib.org>,
Keith Thompson <ks***@mib.org> wrote:
>Because they're passed to a varargs function (printf), which will try
to access them as ints, so if they're in fact bigger than ints -
regardless of their value - things will go horribly wrong.
>Correction: things *might* go horribly wrong. Worse, they might go
horribly right.
Going horribly right is just one of the ways it might go horribly wrong.

-- Richard
--
"Consideration shall be given to the need for as many as 32 characters
in some alphabets" - X3.4, 1963.
Jun 6 '07 #38
On Jun 5, 8:06 pm, Richard Heathfield <r...@see.sig.invalid> wrote:
robert bristow-johnson said:
isn't it clear that when i run these lines of code:
a_ulong = 1234567;
a_short_array[26] = a_ulong;
printf("%d, %hx, %x, %lx \n", sizeof(a_short_array),
a_short_array[26], a_short_array[26], a_ulong );
and get this for output:
256, d687, ffffd687, 12d687
that the bits in the hex digits "12" went bye-bye in the assignment
statement?

Yes. The C Standard does not require implementations to produce a
diagnostic message in this circumstance. A conversion is supplied. Of
necessity, if the lvalue is less wide than the rvalue, any information
stored in those extra bits will be lost. Nevertheless, the conversion
is a useful one in situations where no information is lost, and to take
advantage of it does not constitute a syntax error or constraint
violation, so no diagnostic message is required.
it just seems to me that this conversion qualifies as one that changes
value. then, according to the gcc doc, it should generate a
-Wconversion warning. it's close to an assignment of one type to
another but less severe. for example, if sizeof(unsigned
short)<sizeof(long) we know that no value is changed in this
assignment:

unsigned short a_ushort;
long a_long;
a_ushort = 4321;
a_long = a_ushort;

so no warning should be generated, no matter what bits are in
a_ushort, there is no change of value. whereas (assuming
sizeof(short)<sizeof(unsigned long)) this:

short a_short;
unsigned long a_ulong;
a_short = -4321;
a_ulong = a_short;

should generate a warning because there are values in the range of the
type (short) that are not in the type (unsigned long). so even if the
number of bits in the word is increasing in the assignment, this
should generate a -Wconversion warning (maybe it does, but it should
also do it for the original example i brought).

so i can see this warning as being functionally different from one of
type checking or even if there is a bit reduction. i just wish it
worked right.
>
i just wanna know what flag to set (if any) that makes the
compiler tell me i might want to check the statement that could
potentially throw away those bits.

Check in a newsgroup that deals with your implementation.
yeah, i should look for a gnu or gcc newsgroup. just dunno where.

r b-j

Jun 6 '07 #39
On Jun 5, 8:13 pm, Keith Thompson <k...@mib.org> wrote:
robert bristow-johnson <r...@audioimagination.com> writes:

[...] i just wanna know what flag to set (if any) that makes the
compiler tell me i might want to check the statement that could
potentially throw away those bits. i would think, from the
description that -Wconversion or -Wall should do it, but it doesn't
and i was wondering if the hardcore C or gcc geeks might know the
magic invocation to generate such a warning.

[...]

Try gnu.gcc.help.
thanks. didn't even know about the gnu hierarchy.

r b-j

Jun 6 '07 #40
Richard Heathfield wrote:
Jerry Avins said:
>Keith Thompson wrote:

<snip>
>>No, a "byte" is by definition the size of a char. The term "byte"
may have other meanings outside the context of C, but sizeof(char) is
1 by definition.
Isn't a byte in C the larger of character, octet, or smallest
addressable storage element?

Well, that isn't how it's defined! But yes, your rule looks correct to
me. I think it's easier to think of it as 8+ bits wide.
Do you have a better "rule" than mine to tell just how many that '+'
implies?

Jerry
--
Engineering is the art of making what you want from things you can get.
¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯ ¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯ ¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯
Jun 6 '07 #41
On Tue, 05 Jun 2007 20:33:32 -0400, Jerry Avins <jy*@ieee.org> wrote
in comp.dsp:
Richard Heathfield wrote:
Jerry Avins said:
Keith Thompson wrote:
<snip>
>No, a "byte" is by definition the size of a char. The term "byte"
may have other meanings outside the context of C, but sizeof(char) is
1 by definition.
Isn't a byte in C the larger of character, octet, or smallest
addressable storage element?
Well, that isn't how it's defined! But yes, your rule looks correct to
me. I think it's easier to think of it as 8+ bits wide.

Do you have a better "rule" than mine to tell just how many that '+'
implies?
Very simple. Either you or your program should look at or include the
standard C header <limits.h>. There's a perfectly good macro named
CHAR_BIT defined there.
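
(a minimal sketch of that approach:)

#include <stdio.h>
#include <limits.h>

int main(void)
{
    printf("CHAR_BIT = %d\n", CHAR_BIT);
    printf("short: %d bits, unsigned long: %d bits\n",
           (int)(sizeof(short) * CHAR_BIT),
           (int)(sizeof(unsigned long) * CHAR_BIT));
    return 0;
}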

--
Jack Klein
Home: http://JK-Technology.Com
FAQs for
comp.lang.c http://c-faq.com/
comp.lang.c++ http://www.parashift.com/c++-faq-lite/
alt.comp.lang.learn.c-c++
http://www.club.cc.cmu.edu/~ajo/docs/FAQ-acllc.html
Jun 6 '07 #42
Jerry Avins <jy*@ieee.org> writes:
Richard Heathfield wrote:
>Jerry Avins said:
>>Keith Thompson wrote:
<snip>
>>>No, a "byte" is by definition the size of a char. The term "byte"
may have other meanings outside the context of C, but sizeof(char) is
1 by definition.
Isn't a byte in C the larger of character, octet, or smallest
addressable storage element?
Well, that isn't how it's defined! But yes, your rule looks correct
to me. I think it's easier to think of it as 8+ bits wide.

Do you have a better "rule" than mine to tell just how many that '+'
implies?
A byte is exactly the size of a character (an object of type char).

A byte is at least 8 bits (i.e., >= 1 octet).

A byte is an addressable unit of data storage (i.e., at least as big
as the smallest addressable storage element).

Systems for which the natural size of a character is less than 8 bits,
or where a character is not addressable, are shown no mercy; they must
adapt somehow in order to be conforming. For example, on Cray vector
systems, a character is 8 bits, but the smallest physically
addressable storage unit is 64 bits. The C compiler fakes 8-bit
addressability by storing the byte offset in the high-order bits of a
pointer. There's no hardware support for this; it's done in software
(i.e., by the machine code generated by the compiler). (CHAR_BIT
*could* have been set to 64 rather than 8, but that would have broken
interoperation with other systems.)

--
Keith Thompson (The_Other_Keith) ks***@mib.org <http://www.ghoti.net/~kst>
San Diego Supercomputer Center <* <http://users.sdsc.edu/~kst>
"We must do something. This is something. Therefore, we must do this."
-- Antony Jay and Jonathan Lynn, "Yes Minister"
Jun 6 '07 #43
glen herrmannsfeldt wrote:
Randy Yates wrote:

(snip)
>printf("size of short = %d, size of ulong = %d\n", sizeof(short),
sizeof(unsigned long));

This makes the assumption that sizeof returns an int, when it
often returns something else. Maybe you should also test
^^^^^
always. An int is signed, a size_t is unsigned. sizeof(type) is a
size_t.
sizeof(sizeof(int))==sizeof(int)
Why?
Jun 6 '07 #44
Keith Thompson wrote:

(snip)
An "octet" is by definition 8 bits. If you don't have bits, you can't
have octets. Of course a ternary machine can emulate bits; the answer
to your question then depends on how the emulation is done.
If you use ternary machines, it might be reasonable to refer to a
collection of 8 trits as an "octet". That would conflict with normal
usage, but then so do ternary machines.
Ternary machines are rare, but machines that can do decimal
arithmetic aren't so rare. Even x86 can do it, though not all
that easily.

Soon there will be machines that can do floating-point decimal
arithmetic; as far as I know, C allows that.

-- glen

Jun 6 '07 #45
robert bristow-johnson <rb*@audioimagination.com> writes:
>On Jun 5, 8:06 pm, Richard Heathfield <r...@see.sig.invalid> wrote:
>robert bristow-johnson said:
isn't it clear that when i run these lines of code:
a_ulong = 1234567;
a_short_array[26] = a_ulong;
printf("%d, %hx, %x, %lx \n", sizeof(a_short_array),
a_short_array[26], a_short_array[26], a_ulong );
and get this for output:
256, d687, ffffd687, 12d687
that the bits in the hex digits "12" went bye-bye in the assignment
statement?

Yes. The C Standard does not require implementations to produce a
diagnostic message in this circumstance. A conversion is supplied. Of
necessity, if the lvalue is less wide than the rvalue, any information
stored in those extra bits will be lost. Nevertheless, the conversion
is a useful one in situations where no information is lost, and to take
advantage of it does not constitute a syntax error or constraint
violation, so no diagnostic message is required.

it just seems to me that this conversion qualifies as one that changes
value. then, according to the gcc doc, it should generate a
-Wconversion warning.
From the manual:

-Wconversion Warn if a prototype causes a type conversion that is
different from what would happen to the same argument in the absence
of a prototype. This includes conversions of fixed point to floating
and vice versa, and conversions changing the width or signedness of
a fixed point argument except when the same as the default
promotion.

Also, warn if a negative integer constant expression is implicitly
converted to an unsigned type. For example, warn about the
assignment x = -1 if x is unsigned. But do not warn about explicit
casts like (unsigned) -1.

What do they mean by "prototype"?
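
(a hedged reading: "prototype" there means a declaration that supplies the
parameter types, so arguments are converted to those types rather than
undergoing the default argument promotions. a sketch of the kind of call
that older text describes, with illustrative names:)

void takes_float(float x); /* prototype: the argument is converted to float */

void demo(void)
{
    double d = 3.14;
    takes_float(d); /* with the prototype in scope, d is narrowed to float;
                       without it, default promotion would pass a double.
                       that difference is what the old -Wconversion flags */
}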

No matter what the documentation says or the standards say, I also
find this situation extremely frustrating and counter-productive.

It seems like the compiler emits warnings (or errors) all the time on
type conversions of little consequence, and yet when it comes to
something that causes a real loss of information, it remains silent.

This behaviour ought to be changed.
--
% Randy Yates % "And all that I can do
%% Fuquay-Varina, NC % is say I'm sorry,
%%% 919-577-9882 % that's the way it goes..."
%%%% <ya***@ieee.org % Getting To The Point', *Balance of Power*, ELO
http://home.earthlink.net/~yatescr
Jun 6 '07 #46
glen herrmannsfeldt wrote:
Ben Pfaff wrote:
>glen herrmannsfeldt <ga*@ugcs.caltech.edu> writes:
>>This makes the assumption that sizeof returns an int, when it
often returns something else.
>sizeof's result is never an int, although it can be an unsigned
int.

So I should have said "always" instead of "often"?
It returns a size_t, which is always defined as an unsigned type.

--
<http://www.cs.auckland.ac.nz/~pgut001/pubs/vista_cost.txt>
<http://www.securityfocus.com/columnists/423>
<http://www.aaxnet.com/editor/edit043.html>
<http://kadaitcha.cx/vista/dogsbreakfast/index.html>
cbfalconer at maineline dot net

--
Posted via a free Usenet account from http://www.teranews.com

Jun 6 '07 #47
Richard Tobin wrote:
>
.... snip ...
>
How big is an octet on ternary machines?
5 ternets + "something to express 256/243 rounded up"

--
<http://www.cs.auckland.ac.nz/~pgut001/pubs/vista_cost.txt>
<http://www.securityfocus.com/columnists/423>
<http://www.aaxnet.com/editor/edit043.html>
<http://kadaitcha.cx/vista/dogsbreakfast/index.html>
cbfalconer at maineline dot net

--
Posted via a free Usenet account from http://www.teranews.com

Jun 6 '07 #48
Randy Yates wrote:
>
.... snip ...
>
For example, consider the statement

printf("sizeof(int) = %d", sizeof(int));

and the case in which int is 16 bits and sizeof() returns 32 bits.

Is it true that a little-endian machine will print this correctly,
while a big-endian machine will not?
The cure is simple. Cast the value passed to printf to int.
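
(applied to the statement quoted above:)

printf("sizeof(int) = %d", (int)sizeof(int));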

--
<http://www.cs.auckland.ac.nz/~pgut001/pubs/vista_cost.txt>
<http://www.securityfocus.com/columnists/423>
<http://www.aaxnet.com/editor/edit043.html>
<http://kadaitcha.cx/vista/dogsbreakfast/index.html>
cbfalconer at maineline dot net

--
Posted via a free Usenet account from http://www.teranews.com

Jun 6 '07 #49
Randy Yates wrote:
>
.... snip ...
>
I am aware of this "byte" definition since I ran into it on the TI
TMS C54x C compiler, where sizeof(int) = 1, even though an int is
16 bits. That machine's native datapath is 16 bits wide and
incapable of representing a type smaller than 16 bits.
Nit. It's capable. It just doesn't limit the numbers as much. :-)

--
<http://www.cs.auckland.ac.nz/~pgut001/pubs/vista_cost.txt>
<http://www.securityfocus.com/columnists/423>
<http://www.aaxnet.com/editor/edit043.html>
<http://kadaitcha.cx/vista/dogsbreakfast/index.html>
cbfalconer at maineline dot net

--
Posted via a free Usenet account from http://www.teranews.com

Jun 6 '07 #50