Bytes IT Community

how can i generate warnings for implicit casts that lose bits?

here is a post i put out (using Google Groups) that got dropped by
google:

i am using gcc as so:
$ gcc -v
Using built-in specs.
Target: i386-redhat-linux
Configured with: ../configure --prefix=/usr --mandir=/usr/share/man --
infodir=/usr/share/info --enable-shared --enable-threads=posix --
enable-checking=release --with-system-zlib --enable-__cxa_atexit --
disable-libunwind-exceptions --enable-libgcj-multifile --enable-
languages=c,c++,objc,obj-c++,java,fortran,ada --enable-java-awt=gtk --
disable-dssi --with-java-home=/usr/lib/jvm/java-1.4.2-gcj-1.4.2.0/jre
--with-cpu=generic --host=i386-redhat-linux
Thread model: posix
gcc version 4.1.1 20060525 (Red Hat 4.1.1-1)

and have compiled a simple test program (FILE: hello.c):

//
// $ gcc -Wconversion -o hello hello.c
// $ hello
//

#include <stdio.h>
int main(void)
{
    unsigned long a_ulong = 0;    // 32 bit
    short a_short_array[128];     // 16 bit each

    a_ulong = 1234567;

    a_short_array[26] = a_ulong;

    printf("%d, %hx, %x, %lx \n", sizeof(a_short_array),
           a_short_array[26], a_short_array[26], a_ulong );
    //
    // printf output is:
    //
    // 256, d687, ffffd687, 12d687
    //
}

and ran it as so:

$ gcc -Wconversion -o hello hello.c
$ hello

getting output:

256, d687, ffffd687, 12d687

now, i have confirmed that a short is 16 bits and an unsigned long is
32 bits. why does not this line of code:
a_short_array[26] = a_ulong;
generate a warning when i have the -Wconversion or -Wall flags set on
the gcc invocation line?

there is clearly a loss of bits (or a changing of value).
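
(For reference, the arithmetic behind that output: 1234567 is
0x0012d687; storing it in a 16-bit short keeps only the low 16 bits,
0xd687, which read as a signed short is -10617; when that short is
promoted back to int for the variadic printf call it is sign-extended,
giving the ffffd687 shown by %x.)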

here is what the manual says about it:
from http://gcc.gnu.org/onlinedocs/gcc/Wa...arning-Options
:

-Wconversion
Warn for implicit conversions that may alter a value. This
includes conversions between real and integer, like abs (x) when x is
double; conversions between signed and unsigned, like unsigned ui =
-1; and conversions to smaller types, like sqrtf (M_PI). Do not warn
for explicit casts like abs ((int) x) and ui = (unsigned) -1, or if
the value is not changed by the conversion like in abs (2.0). Warnings
about conversions between signed and unsigned integers can be disabled
by using -Wno-sign-conversion.

For C++, also warn for conversions between NULL and non-pointer
types; confusing overload resolution for user-defined conversions; and
conversions that will never use a type conversion operator:
conversions to void, the same type, a base class or a reference to
them. Warnings about conversions between signed and unsigned integers
are disabled by default in C++ unless -Wsign-conversion is explicitly
enabled.
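
The manual's examples, rendered as a compilable sketch (each flagged
line is one the quoted text says should draw a warning; M_PI is a
common <math.h> extension rather than strict ISO C):

#include <math.h>
#include <stdlib.h>

int main(void)
{
    double x = -2.5;
    int i = abs(x);                /* real -> integer                */
    unsigned ui = -1;              /* signed -> unsigned             */
    float f = sqrtf(M_PI);         /* double -> smaller type (float) */

    int j = abs((int) x);          /* explicit cast: no warning      */
    unsigned uk = (unsigned) -1;   /* explicit cast: no warning      */

    return i + (int) ui + (int) f + j + (int) uk;
}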

is there some other compiler flag i need to hit? i don't get why this
doesn't generate a warning.
finally, please reply to both newsgroups as i don't hang around
comp.lang.c very much.

thank you,

r b-j

Jun 5 '07
82 Replies


robert bristow-johnson wrote:
>
grumble, grumble...
.... snip ...
>
i'm no linux or gnu freak (really a neophyte), i just remember in
my old codewarrior days that there was a nice little check box i
could hit to see such warnings. killing such warnings is a useful
discipline to have to avoid some unforeseen bugs that might also be
hard to find. it's sorta like enforcing strict type checking.
Try "gcc -W -Wall -ansi -pedantic"

--
<http://www.cs.auckland.ac.nz/~pgut001/pubs/vista_cost.txt>
<http://www.securityfocus.com/columnists/423>
<http://www.aaxnet.com/editor/edit043.html>
<http://kadaitcha.cx/vista/dogsbreakfast/index.html>
cbfalconer at maineline dot net
--
Posted via a free Usenet account from http://www.teranews.com

Jun 6 '07 #51

Vladimir Vassilevsky wrote:
glen herrmannsfeldt wrote:
.... snip ...
>>
This makes the assumption that sizeof returns an int, when it
often returns something else. Maybe you should also test
sizeof(sizeof(int))==sizeof(int)

This also makes the assumption that sizeof() returns size in
bytes, whereas sizeof returns the size in chars. Char may be
bigger than one byte.
No it can't. Read the standard. Chars may be bigger than 8 bits,
but the size of a char is the same as the size of a byte. C is
peculiar that way.
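
Since the test program at the top of the thread prints a sizeof
result with %d, a minimal sketch of the portable alternatives seems
worth recording here:

#include <stdio.h>

int main(void)
{
    short a_short_array[128];

    /* C99: %zu is the conversion specifier for size_t */
    printf("%zu\n", sizeof a_short_array);

    /* C90 has no %zu: convert explicitly and print as unsigned long */
    printf("%lu\n", (unsigned long) sizeof a_short_array);

    return 0;
}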

--
<http://www.cs.auckland.ac.nz/~pgut001/pubs/vista_cost.txt>
<http://www.securityfocus.com/columnists/423>
<http://www.aaxnet.com/editor/edit043.html>
<http://kadaitcha.cx/vista/dogsbreakfast/index.html>
cbfalconer at maineline dot net

--
Posted via a free Usenet account from http://www.teranews.com

Jun 6 '07 #52

Jerry Avins wrote:
(snip)
Isn't a byte in C the larger of character, octet, or smallest
addressable storage element?
There are complications on some word addressable machines.

For the PDP-10 it is suggested that CHAR_BIT be either 9 or 18.
While its addressable unit is the 36 bit word, there are
instructions to directly address halfwords, and it isn't so hard
to address 9 bit bytes.

There are stories of C compilers for the PDP-10, though they
are not easy to find.

-- glen

Jun 6 '07 #53

CBFalconer wrote:
Vladimir Vassilevsky wrote:
(snip)
>>This also makes the assumption that sizeof() returns size in
bytes, whereas sizeof returns the size in chars. Char may be
bigger than one byte.
No it can't. Read the standard. Chars may be bigger than 8 bits,
but the size of a char is the same as the size of a byte. C is
peculiar that way.
It depends on context.

If one wrote a C program to print out the available
disk space on a system, the user might expect it to be in
eight bit bytes, even when CHAR_BIT was not 8.

But yes, sizeof (which is not a function) units are
sizeof(char), and are called bytes independent of the
actual size.

-- glen

Jun 6 '07 #54

On Tue, 05 Jun 2007 15:05:00 -0700, Ben Pfaff <bl*@cs.stanford.edu>
wrote:
>Vladimir Vassilevsky <an************@hotmail.com> writes:
>This also makes the assumption that sizeof() returns size in bytes,
>whereas sizeof returns the size in chars. Char may be bigger than one
>byte.

This statement reflects some confusion about C definitions. In
C, a char is always one byte, in that sizeof(char) is always 1.
However, the size of a byte is implementation-defined: it may be
larger than one octet (though not smaller).
The C standard screwed up when it chose to use the term "byte" to
refer to what is really a storage unit. Everyone who owns a hard drive
knows that a byte is 8 bits. So do most programmers, regardless of
their programming language choice.

The Ada standard got it right, because it chose to use the term
"storage unit" to refer to what the C standard refers to as a "byte".
Ada83 pre-dates C90 by enough years--go figure.

--
jay
Jun 6 '07 #55

jaysome <ja*****@hotmail.com> writes:
On Tue, 05 Jun 2007 15:05:00 -0700, Ben Pfaff <bl*@cs.stanford.edu>
wrote:
>>Vladimir Vassilevsky <an************@hotmail.com> writes:
>>This also makes the assumption that sizeof() returns size in bytes,
>>whereas sizeof returns the size in chars. Char may be bigger than one
>>byte.

This statement reflects some confusion about C definitions. In
C, a char is always one byte, in that sizeof(char) is always 1.
However, the size of a byte is implementation-defined: it may be
larger than one octet (though not smaller).

The C standard screwed up when it chose to use the term "byte" to
refer to what is really a storage unit. Everyone who owns a hard drive
knows that a byte is 8 bits. So do most programmers, regardless of
their programming language choice.
The term "byte" was in widespread use for a number of years before the
meanings other than 8 bits became rare.

--
Keith Thompson (The_Other_Keith) ks***@mib.org <http://www.ghoti.net/~kst>
San Diego Supercomputer Center <* <http://users.sdsc.edu/~kst>
"We must do something. This is something. Therefore, we must do this."
-- Antony Jay and Jonathan Lynn, "Yes Minister"
Jun 6 '07 #56

jaysome wrote:
>
.... snip ...
>
The C standard screwed up when it chose to use the term "byte" to
refer to what is really a storage unit. Everyone who owns a hard
drive knows that a byte is 8 bits. So do most programmers,
regardless of their programming language choice.

The Ada standard got it right, because it chose to use the term
"storage unit" to refer to what the C standard refers to as a
"byte". Ada83 pre-dates C90 by enough years--go figure.
C is screwed up in many ways, however people are used to it. You
can't change it, so adapt.

--
<http://www.cs.auckland.ac.nz/~pgut001/pubs/vista_cost.txt>
<http://www.securityfocus.com/columnists/423>
<http://www.aaxnet.com/editor/edit043.html>
<http://kadaitcha.cx/vista/dogsbreakfast/index.html>
cbfalconer at maineline dot net

--
Posted via a free Usenet account from http://www.teranews.com

Jun 6 '07 #57

robert bristow-johnson wrote:
here is a post i put out (using Google Groups) that got dropped by
google:

.... snip (the original post, quoted in full) ...

is there some other compiler flag i need to hit? i don't get why this
doesn't generate a warning.
finally, please reply to both newsgroups as i don't hang around
comp.lang.c very much.

thank you,

r b-j
Do not know about gcc, but lcc-win32 produces:
Warning twarn.c: 14 Assignment of unsigned long to short. Possible loss
of precision.

You have to use a higher warning level:
lcc -A twarn.c

In general, lcc-win32 warns when sizes differ in an assignment, the
recipient being smaller than the source.

Jun 6 '07 #58

On Wed, 06 Jun 2007 03:26:00 -0400, CBFalconer <cb********@yahoo.com>
wrote:
>jaysome wrote:
>>
... snip ...
>>
The C standard screwed up when it chose to use the term "byte" to
refer to what is really a storage unit. Everyone who owns a hard
drive knows that a byte is 8 bits. So do most programmers,
regardless of their programming language choice.

The Ada standard got it right, because it chose to use the term
"storage unit" to refer to what the C standard refers to as a
"byte". Ada83 pre-dates C90 by enough years--go figure.

C is screwed up in many ways,
Right. But overall, it's not that screwed up.
however people are used to it.
Not all people.
>You can't change it
Right.
so adapt.
I have.

--
jay
Jun 6 '07 #59

Richard Tobin wrote, On 05/06/07 23:14:
In article <87************@blp.benpfaff.org>,
Ben Pfaff <bl*@cs.stanford.edu> wrote:
>However, the size of a byte is implementation-defined: it may be
larger than one octet (though not smaller).

How big is an octet on ternary machines?
Irrelevant. Integer types (including char) are required to use a pure
binary representation :-)
--
Flash Gordon
Jun 6 '07 #60

robert bristow-johnson wrote:
here is a post i put out (using Google Groups) that got dropped by
google:
I'm going to ignore your real problem [1] in favour of criticising your
subject line:

... implicit casts ...

C doesn't have implicit casts. Casts are explicit syntax, `(Type) Expr`.
It has implicit /conversions/.

I'll have another latte now.

[1] Which I see has generated much -- hopefully helpful -- response.

--
"It was the first really clever thing the King had said that day."
/Alice in Wonderland/

Hewlett-Packard Limited registered no: 690597 England
registered office: Cain Road, Bracknell, Berks RG12 1HN

Jun 6 '07 #61

jacob navia wrote:
robert bristow-johnson wrote:
>>
is there some other compiler flag i need to hit? i don't get why this
doesn't generate a warning.
Do not know about gcc, but lcc-win32 produces:
Did you have to quote the entire message?

--
Ian Collins.
Jun 6 '07 #62

Ian Collins wrote:
jacob navia wrote:
>robert bristow-johnson wrote:
>>>
is there some other compiler flag i need to hit? i don't get why this
doesn't generate a warning.
Do not know about gcc, but lcc-win32 produces:

Did you have to quote the entire message?
I don't think he cares about normal Usenet protocol. He certainly
ignores topicality here.

At any rate, there is a large difference between:

shortthing = bigthing;
and
shortthing = (shorttype)bigthing;

in that the second has already performed the conversion, and
nothing is lost thereafter.

--
<http://www.cs.auckland.ac.nz/~pgut001/pubs/vista_cost.txt>
<http://www.securityfocus.com/columnists/423>
<http://www.aaxnet.com/editor/edit043.html>
<http://kadaitcha.cx/vista/dogsbreakfast/index.html>
cbfalconer at maineline dot net

--
Posted via a free Usenet account from http://www.teranews.com

Jun 6 '07 #63

robert bristow-johnson <r...@audioimagination.com> writes:
it just seems to me that this conversion qualifies as one that changes
value. then, according to the gcc doc, it should generate a -
Wconversion warning. it's close to an assignment of one type to
another but less severe. for example, if sizeof(unsigned
short)<sizeof(long) we know that no value is changed in this
assignment:

unsigned short a_ushort;
long a_long;
a_ushort = 4321;
a_long = a_ushort;

so no warning should be generated, no matter what bits are in
a_ushort, there is no change of value. whereas (assuming
sizeof(short)<sizeof(unsigned long)) this:

short a_short;
unsigned long a_ulong;
a_short = -4321;
a_ulong = a_short;

should generate a warning because there are values in the range of the
type (short) that are not in the type (unsigned long). so even if the
number of bits in the word are increasing in the assignment, this
should generate a -Wcondition warning ...
i meant to say "generate a -Wconversion warning"
On Jun 5, 11:15 pm, Randy Yates <y...@ieee.org> wrote:
From the manual:

-Wconversion Warn if a prototype causes a type conversion that is
different from what would happen to the same argument in the absence
of a prototype. This includes conversions of fixed point to floating
and vice versa, and conversions changing the width or signedness of
a fixed point argument except when the same as the default
promotion.

Also, warn if a negative integer constant expression is implicitly
converted to an unsigned type. For example, warn about the
assignment x = -1 if x is unsigned. But do not warn about explicit
casts like (unsigned) -1.

What do they mean by "prototype"?
a function prototype (i think that's what they mean) is when you put
at the top of a C file (or in a header file) the declaration such as:

double my_math_function(double argument1, long argument2);

(note the semicolon at the end). it can be abbreviated as so:

double my_math_function(double, long);

what i like to do is just copy the top line of the function when i
write it, paste it in the header file, tack on a semicolon to the end
of it and then feel good and smug that i was obeying good coding
practices in C. in CodeWarrior, there is another check box that says
"Require prototypes" and an error will be generated if you define or
call a function without prototyping it. it's just another little type-
checking discipline that can save your ass at a later date.
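
For what it's worth, here is a minimal sketch of the prototype-caused
conversion that the older -Wconversion text quoted above appears to
describe (my reading of that text; names are illustrative):

void takes_float(float x);   /* prototype forces conversion to float */

void demo(double d)
{
    /* With the prototype in scope, d is converted to float; without
     * it, the default argument promotions would pass a double. The
     * quoted documentation says the warning fires exactly when the
     * two differ, as they do here. */
    takes_float(d);
}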

i remember once, programming a 68K Mac without requiring prototypes,
and i made a simple call to a standard trancendental function (like
sin() or exp()) but i forgot to #include <math.h>. so the calling
function assumed (by default, since there was no prototype) that sin()
returned int which was 16 or 32 bits, but it really returned double or
extended (64 or 80 bits) and placed that return value on the stack
above the PC. so even though it was automatically linked to the math
lib when i built the app, what happened was that the calling program
(my code) did not reserve sufficient space on the stack for the return
value and the called program plopped a big 64 bit or bigger word than
the space that was made and my machine crashed. it was a stupid and
hard bug to fix because i couldn't understand why calling a standard
math function would crash the machine. i actually single-stepped
through this before realizing that i forgot to #include the header
file with the correct prototypes which was the source of the problem.
ever since then, i've become a believer in building projects with
prototypes required (except maybe if the function is defined in the
same C file it is called, and called *only* in that file, and defined
*before* it is called).
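
A minimal sketch of that class of bug under C89 rules (the missing
header is the whole point; C99 and later at least diagnose the
implicit declaration):

#include <stdio.h>
/* #include <math.h> deliberately left out */

int main(void)
{
    /* With no prototype in scope, C89 implicitly declares sin() as
     * returning int. The real sin() returns double, so the caller
     * fetches a result of the wrong size from the wrong place:
     * undefined behavior, often a crash. */
    double x = sin(1.0);

    printf("%f\n", x);
    return 0;
}
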
No matter what the documentation says or the standards say, I also
find this situation extremely frustrating and counter-productive.

It seems like the compiler emits warnings (or errors) all the time on
type conversions of little consequence, and yet when it comes to
something that causes a real loss of information, it remains silent.

This behaviour ought to be changed.
i agree. it seems to me to be standard that an *implicit* conversion
that potentially changes value (whether or not there is a bit
reduction) should, at least as an option, be flagged. (explicit casts
need not flag a warning, the compiler can assume you knew what you
were doing.) when you are building a big project with a couple
hundred files with interconnected spagetti code and some value got
clobbered (more precisely "clipped" or masked) because one guy thought
we were dealing with shorts and another thought longs (or signed longs
vs. unsigned longs, whatever), we should be informed by the compiler
when such conversions are made, whether in a simple assignment or in
passing a value to a function (that was expecting a slightly different
type).

i'm afraid this is like that stupid MATLAB/Octave indexing thing (all
arrays must start with index 1). it's something that obviously should
be fixed, but those with the wherewithal to fix it will deny that
it's a problem to start with (i guess that's one way to fix a problem
- deny the existence of it).

r b-j

Jun 6 '07 #64

CBFalconer wrote:
Ian Collins wrote:
>jacob navia wrote:
>>robert bristow-johnson wrote:
is there some other compiler flag i need to hit? i don't get why this
doesn't generate a warning.

Do not know about gcc, but lcc-win32 produces:
Did you have to quote the entire message?

I don't think he cares about normal Usenet protocol. He certainly
ignores topicality here.
Mmm This thread is about a gcc warning. Obviously gcc is
on topic, but lcc-win32 is not.

Why?

Because linux is cool and windows is not apparently. Or because
anything I say is off topic.

At any rate, there is a large difference between:

shortthing = bigthing;
and
shortthing = (shorttype)bigthing;

in that the second has already performed the conversion, and
nothing is lost thereafter.
I know that.
Jun 6 '07 #65

jacob navia said:
CBFalconer wrote:
>Ian Collins wrote:
>>jacob navia wrote:
robert bristow-johnson wrote:
is there some other compiler flag i need to hit? i don't get why
this doesn't generate a warning.
>
Do not know about gcc, but lcc-win32 produces:
Did you have to quote the entire message?

I don't think he cares about normal Usenet protocol. He certainly
ignores topicality here.

Mmm This thread is about a gcc warning. Obviously gcc is
on topic, but lcc-win32 is not.
No, gcc is off-topic here, just like lcc-win32 is off-topic here. The
proper course for respondents would have been to examine the source the
OP provided, to see whether the issue could somehow be resolved using
standard C. If not, they should have referred the OP to a gcc-specific
group.

<snip>

--
Richard Heathfield
"Usenet is a strange place" - dmr 29/7/1999
http://www.cpax.org.uk
email: rjh at the above domain, - www.
Jun 6 '07 #66

CBFalconer <cb********@yahoo.com> writes:
[...]
At any rate, there is a large difference between:

shortthing = bigthing;
and
shortthing = (shorttype)bigthing;

in that the second has already performed the conversion, and
nothing is lost thereafter.
Assuming that shortthing is of type shorttype, there's no semantic
difference. The same conversion is performed in both cases; it's just
implicit in the first, and explicit in the second. (And in the second
case, it might be promoted after the cast and then narrowed for the
assignment, but I don't think that can have any visible effect, and a
compiler is likely to optimize it out.)

A compiler might reasonably warn in one case but not in the other, but
there's no requirement for it to do so.
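
A minimal sketch of the equivalence, using the names from the posts
above (assuming a 16-bit short; both assignments store the same
implementation-defined result):

#include <stdio.h>

int main(void)
{
    long bigthing = 123456L;   /* 0x1e240: does not fit in 16 bits */
    short shortthing;

    shortthing = bigthing;          /* implicit narrowing conversion */
    printf("%hd\n", shortthing);    /* typically -7616 (0xe240)      */

    shortthing = (short) bigthing;  /* same conversion, explicit     */
    printf("%hd\n", shortthing);    /* same value                    */

    return 0;
}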

--
Keith Thompson (The_Other_Keith) ks***@mib.org <http://www.ghoti.net/~kst>
San Diego Supercomputer Center <* <http://users.sdsc.edu/~kst>
"We must do something. This is something. Therefore, we must do this."
-- Antony Jay and Jonathan Lynn, "Yes Minister"
Jun 6 '07 #67

Keith Thompson wrote:
CBFalconer <cb********@yahoo.com> writes:
[...]
>At any rate, there is a large difference between:

shortthing = bigthing;
and
shortthing = (shorttype)bigthing;

in that the second has already performed the conversion, and
nothing is lost thereafter.

Assuming that shortthing is of type shorttype, there's no semantic
difference. The same conversion is performed in both cases; it's just
implicit in the first, and explicit in the second. (And in the second
case, it might be promoted after the cast and then narrowed for the
assignment, but I don't think that can have any visible effect, and a
compiler is likely to optimize it out.)

A compiler might reasonably warn in one case but not in the other, but
there's no requirement for it to do so.
lcc-win32 does NOT warn when the programmer explicitely casts
the assignment since it assumes that the programmer knows
about that particular data representation and it supposes
that the cast is safe.

When there is an implict conversion however, it could be due
to an oversight or an error, and in that case a warning is
useful.

Jun 6 '07 #68

On Jun 6, 2:28 pm, Richard Heathfield <r...@see.sig.invalid> wrote:
jacob navia said:
CBFalconer wrote:
Ian Collins wrote:
jacob navia wrote:
robert bristow-johnson wrote:
is there some other compiler flag i need to hit? i don't get why
this doesn't generate a warning.
>>Do not know about gcc, but lcc-win32 produces:
Did you have to quote the entire message?
I don't think he cares about normal Usenet protocol. He certainly
ignores topicality here.
Mmm This thread is about a gcc warning. Obviously gcc is
on topic, but lcc-win32 is not.

No, gcc is off-topic here, just like lcc-win32 is off-topic here. The
proper course for respondents would have been to examine the source the
OP provided, to see whether the issue could somehow be resolved using
standard C. If not, they should have referred the OP to a gcc-specific
group.
as the OP, i didn't think this was so off-topic for comp.lang.c (i
cross-posted to comp.dsp because there is where i loiter and i know
that some of those guys think about nasty details like this). it
wasn't until someone pointed me to gnu.gcc.something that i had any
idea of the "proper" newsgroup to plop this onto.

i think jacob was reasonably on-topic as can be expected. (at least
you should see how conversations drift at comp.dsp. be careful there
because i have been known to rant if the provocation is sufficient.
and i'm not the only one.)

i gotta find that gnu.gcc.whatever group and post the question there.

r b-j

Jun 6 '07 #69

On Jun 6, 6:11 pm, Erik de Castro Lopo <e...@mega-nerd.com> wrote:
robert bristow-johnson wrote:
i still haven't heard the magic invocation i make to the gcc compiler
that will warn me when an implicit conversion can potentially change a
value. the manual seems to say that -Wconversion should do it, but i
know that it does not, at least in my reasonably current
implementation of gcc on linux.

I haven't tested it, but I think -Wconversion will generate the
warnings you require on things like:

long a ;
int b = 3 ;

a = b ;
why would it do that? there is no loss of information in that
assignment. did you mean b=a;? (then it would have to be established
that sizeof(int)<sizeof(long) or even that would not be a potentially
bad assignment.)
The problem with printf is that it uses <stdarg.h> variable
parameter passing and hence can't do the same checks.
the conversion that happens when a function is called is another (but
related) issue, and you're right, printf() has no way to know itself
what the size of the args are (except that we inform it with all of
the %d or %hd or %ld) and my little test program in the original
post demonstrated that with the apparent sign extension done with the
short args. the signed short (a_short_array[26]) was sign extended to
ffff-something when placed on the stack and passed to printf(), but
with the %hx field, only the bottom 16 bits were shown. with the %x
field, it showed the 16 bits of the argument in addition to the 16 bits
of sign extension. either way, when the 32-bit long was assigned to a
16-bit short, this should have generated a warning when -Wconversion
was set, in my opinion. and it didn't.
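
A minimal sketch of the promotion behavior described above (assuming
a 16-bit short and 32-bit int; passing a negative int for %x is
strictly undefined, but this shows what typically happens):

#include <stdio.h>

int main(void)
{
    short s = (short) 0xd687;   /* -10617 on two's complement */

    /* s is promoted to int (sign-extended) before the variadic call */
    printf("%hx\n", s);  /* d687: converted back to unsigned short    */
    printf("%x\n", s);   /* ffffd687: the sign-extended promoted value */

    return 0;
}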

r b-j

Jun 7 '07 #70

Keith Thompson wrote:
CBFalconer <cb********@yahoo.com> writes:
[...]
>At any rate, there is a large difference between:

shortthing = bigthing;
and
shortthing = (shorttype)bigthing;

in that the second has already performed the conversion, and
nothing is lost thereafter.

Assuming that shortthing is of type shorttype, there's no semantic
difference. The same conversion is performed in both cases; it's
just implicit in the first, and explicit in the second. (And in
the second case, it might be promoted after the cast and then
narrowed for the assignment, but I don't think that can have any
visible effect, and a compiler is likely to optimize it out.)

A compiler might reasonably warn in one case but not in the other,
but there's no requirement for it to do so.
There is no possible reason for a warning in the second. Assigning
a shorttype to shortthing is a completely normal action. There
never is an imperative warning.

--
<http://www.cs.auckland.ac.nz/~pgut001/pubs/vista_cost.txt>
<http://www.securityfocus.com/columnists/423>
<http://www.aaxnet.com/editor/edit043.html>
<http://kadaitcha.cx/vista/dogsbreakfast/index.html>
cbfalconer at maineline dot net

--
Posted via a free Usenet account from http://www.teranews.com

Jun 7 '07 #71

jacob navia wrote:
CBFalconer wrote:
>Ian Collins wrote:
>>jacob navia wrote:
robert bristow-johnson wrote:

is there some other compiler flag i need to hit? i don't get
why this doesn't generate a warning.
>
Do not know about gcc, but lcc-win32 produces:

Did you have to quote the entire message?

I don't think he cares about normal Usenet protocol. He certainly
ignores topicality here.

Mmm This thread is about a gcc warning. Obviously gcc is
on topic, but lcc-win32 is not.
I was referring to the unnecessary inclusion. F'ups set.

--
<http://www.cs.auckland.ac.nz/~pgut001/pubs/vista_cost.txt>
<http://www.securityfocus.com/columnists/423>
<http://www.aaxnet.com/editor/edit043.html>
<http://kadaitcha.cx/vista/dogsbreakfast/index.html>
cbfalconer at maineline dot net

--
Posted via a free Usenet account from http://www.teranews.com

Jun 7 '07 #72

CBFalconer <cb********@yahoo.com> writes:
Keith Thompson wrote:
>CBFalconer <cb********@yahoo.com> writes:
[...]
>>At any rate, there is a large difference between:

shortthing = bigthing;
and
shortthing = (shorttype)bigthing;

in that the second has already performed the conversion, and
nothing is lost thereafter.

Assuming that shortthing is of type shorttype, there's no semantic
difference. The same conversion is performed in both cases; it's
just implicit in the first, and explicit in the second. (And in
the second case, it might be promoted after the cast and then
narrowed for the assignment, but I don't think that can have any
visible effect, and a compiler is likely to optimize it out.)

A compiler might reasonably warn in one case but not in the other,
but there's no requirement for it to do so.

There is no possible reason for a warning in the second. Assigning
a shorttype to shortthing is a completely normal action. There
never is an imperative warning.
There is a *possible* reason to warn about the cast (not about the
assignment). Converting bigthing to shorttype can lose information.

--
Keith Thompson (The_Other_Keith) ks***@mib.org <http://www.ghoti.net/~kst>
San Diego Supercomputer Center <* <http://users.sdsc.edu/~kst>
"We must do something. This is something. Therefore, we must do this."
-- Antony Jay and Jonathan Lynn, "Yes Minister"
Jun 7 '07 #73

Keith Thompson wrote:
CBFalconer <cb********@yahoo.com> writes:
>Keith Thompson wrote:
>>CBFalconer <cb********@yahoo.com> writes:
[...]
At any rate, there is a large difference between:

shortthing = bigthing;
and
shortthing = (shorttype)bigthing;

in that the second has already performed the conversion, and
nothing is lost thereafter.

Assuming that shortthing is of type shorttype, there's no semantic
difference. The same conversion is performed in both cases; it's
just implicit in the first, and explicit in the second. (And in
the second case, it might be promoted after the cast and then
narrowed for the assignment, but I don't think that can have any
visible effect, and a compiler is likely to optimize it out.)

A compiler might reasonably warn in one case but not in the other,
but there's no requirement for it to do so.

There is no possible reason for a warning in the second. Assigning
a shorttype to shortthing is a completely normal action. There
never is an imperative warning.

There is a *possible* reason to warn about the cast (not about the
assignment). Converting bigthing to shorttype can lose information.
Agreed, which is what I was trying to point out.

--
<http://www.cs.auckland.ac.nz/~pgut001/pubs/vista_cost.txt>
<http://www.securityfocus.com/columnists/423>
<http://www.aaxnet.com/editor/edit043.html>
<http://kadaitcha.cx/vista/dogsbreakfast/index.html>
cbfalconer at maineline dot net

--
Posted via a free Usenet account from http://www.teranews.com

Jun 7 '07 #74

On Jun 7, 8:51 am, CBFalconer <cbfalco...@yahoo.com> wrote:
Keith Thompson wrote:
....
There is a *possible* reason to warn about the cast (not about the
assignment). Converting bigthing to shorttype can lose information.

Agreed, which is what I was trying to point out.
in my opinion, when a programmer uses an explicit cast, the compiler
should be able to assume the guy knows what he is doing (and no
warning necessary). it is the implicit conversions that happen when
one type is assigned to another type or passed as an argument that was
meant to be (and declared as) another type, that might need warnings.
when we do that without an explicit cast (destination_type) operator,
if and only if there is a possible change of value even if the word
size increased, there should be a warning, or at least the option of
that.

this code:

unsigned long an_unsigned_long;
short a_short;
...
an_unsigned_long = a_short;

should generate such a warning, even if the bits get bigger.
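
To make the value change concrete, here is a minimal sketch (assuming
a 16-bit short and 32-bit unsigned long):

#include <stdio.h>

int main(void)
{
    short a_short = -4321;
    unsigned long an_unsigned_long = a_short;

    /* -4321 is not representable as unsigned long, so it wraps
     * modulo ULONG_MAX+1: 4294962975 (0xffffef1f) with 32 bits. */
    printf("%lu (%lx)\n", an_unsigned_long, an_unsigned_long);

    return 0;
}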

r b-j
Jun 7 '07 #75

robert bristow-johnson wrote:
(snip)
in my opinion, when a programmer uses an explicit cast, the compiler
should be able to assume the guy knows what he is doing (and no
warning necessary). it is the implicit conversions that happen when
one type is assigned to another type or passed as an argument that was
meant to be (and declared as) another type, that might need warnings.
when we do that without an explicit cast (destination_type) operator,
if and only if there is a possible change of value even if the word
size increased, there should be a warning, or at least the option of
that.
C is intended to be a relatively low level language, and programmers
are assumed to know what they are doing. Traditionally warnings like
you mention might have been generated by lint, though I don't know
that it ever specifically did that one. Too much code has been
written assuming those conversions work. Note, for example, that

short i;
i = i + 2;

would give a warning as i+2 is int, converted to short by assignment.

On the other hand, Java requires a cast for all narrowing assignments
as part of the language definition. That is sometimes inconvenient,
but mostly reminds the programmer to think before writing. Java is
not intended to be as (relatively) low-level a language as C.

-- glen

Jun 7 '07 #76

robert bristow-johnson <rb*@audioimagination.com> writes:
On Jun 7, 8:51 am, CBFalconer <cbfalco...@yahoo.com> wrote:
>Keith Thompson wrote:
...
There is a *possible* reason to warn about the cast (not about the
assignment). Converting bigthing to shorttype can lose information.

Agreed, which is what I was trying to point out.

in my opinion, when a programmer uses an explicit cast, the compiler
should be able to assume the guy knows what he is doing (and no
warning necessary). it is the implicit conversions that happen when
one type is assigned to another type or passed as an argument that was
meant to be (and declared as) another type, that might need warnings.
when we do that without an explicit cast (destination_type) operator,
if and only if there is a possible change of value even if the word
size increased, there should be a warning, or at least the option of
that.

this code:

unsigned long an_unsigned_long;
short a_short;
...
an_unsigned_long = a_short;

should generate such a warning, even if the bits get bigger.
I agree 100 percent.

And just to establish that I've "been around the C block," this year
marks my 18th year using the language. I've used compilers from multiple
vendors under multiple platforms.
--
% Randy Yates % "She has an IQ of 1001, she has a jumpsuit
%% Fuquay-Varina, NC % on, and she's also a telephone."
%%% 919-577-9882 %
%%%% <ya***@ieee.org % 'Yours Truly, 2095', *Time*, ELO
http://home.earthlink.net/~yatescr
Jun 7 '07 #77

On Tue, 05 Jun 2007 23:51:14 -0700, jaysome <ja*****@hotmail.com>
wrote:
On Tue, 05 Jun 2007 15:05:00 -0700, Ben Pfaff <bl*@cs.stanford.edu>
wrote:
This statement reflects some confusion about C definitions. In
C, a char is always one byte, in that sizeof(char) is always 1.
However, the size of a byte is implementation-defined: it may be
larger than one octet (though not smaller).

The C standard screwed up when it chose to use the term "byte" to
refer to what is really a storage unit. Everyone who owns a hard drive
knows that a byte is 8 bits. So do most programmers, regardless of
their programming language choice.
Not people who own (or owned) the drives or disks used on PDP-10 or
-6, PDP-8 or -12, GE-635/645/HIS6180, and at least some CDC machines.
And probably quite a few more, although some of the non-8-bit-byte
machines existed before disks became affordable or even possible.
The Ada standard got it right, because it chose to use the term
"storage unit" to refer to what the C standard refers to as a "byte".
Ada83 pre-dates C90 by enough years--go figure.
An IMNVHO underappreciated benefit of Ada is that its terminology and
particularly language keywords were carefully chosen -- admittedly
starting from a clean slate -- to not only be precise _and_ clear, but
to fit together well. IME it's the only serious language other than
COBOL in which one can write reasonably good code that also reads as
tolerable prose. (I consider Knuth's literate programming a
superstructure/methodology rather than a language as such.)

- formerly david.thompson1 || achar(64) || worldnet.att.net
Jul 1 '07 #78

David Thompson wrote:
An IMNVHO underappreciated benefit of Ada is that its terminology and
particularly language keywords were carefully chosen -- admittedly
starting from a clean slate -- to not only be precise _and_ clear, but
to fit together well. IME it's the only serious language other than
COBOL in which one can write reasonably good code that also reads as
tolerable prose. ...
Forth?

Jerry
--
Engineering is the art of making what you want from things you can get.
¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯ ¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯ ¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯
Jul 1 '07 #79

Jean-Marc Bourguet wrote:
>
.... snip ...
>
The DEC PDP-10 had instructions to manipulate bytes (again the
architecture description uses the term byte) whose width was
anything between 1 and 36 bits, and which were commonly used with
7-bit ASCII characters (yes, there was a lost bit per word).
And the janitors made a fortune sweeping up those bits and hawking
them as PDP 11 memory. They were the heart of the bit serial PDP
11 model.

--
<http://www.cs.auckland.ac.nz/~pgut001/pubs/vista_cost.txt>
<http://www.securityfocus.com/columnists/423>
<http://www.aaxnet.com/editor/edit043.html>
cbfalconer at maineline dot net

--
Posted via a free Usenet account from http://www.teranews.com

Jul 1 '07 #80

Jean-Marc Bourguet wrote:
(snip)
The historical meaning of byte is for sure this one:
"A group of bits sufficient to represent one character is called a _byte_
-- a term coined in 1958 by Werner Buchholz."
Computer Architecture, Concepts and Evolution
Gerrit A. Blaauw and Frederick P. Brooks, Jr.
(They give as reference a paper of 1959 which they co-authored with
Buchholz).
The IBM 7030 (aka Stretch) had instructions to manipulate bytes (the
architecture description uses the term byte) whose width was anything
between 1 and 8 bits. BTW, Buchholz, Brooks and Blaauw were all three
part of the architecture team of Stretch. The character set designed for
Stretch was the first 8-bit one I know of.
I have the book, I will have to look at it. I always thought that
EBCDIC was designed for S/360.
The DEC PDP-10 had instructions to manipulate bytes (again the architecture
description uses the term byte) whose width was anything between 1 and 36
bits, and which were commonly used with 7-bit ASCII characters (yes, there
was a lost bit per word).
The bit isn't lost if you have files with line numbers.

How much of the PDP-10 has heritage in the IBM 36 bit machines?

-- glen

Jul 2 '07 #81

Jerry Avins <jy*@ieee.org> wrote:
David Thompson wrote:
An IMNVHO underappreciated benefit of Ada is that its terminology and
particularly language keywords was carefully chosen -- admittedly
starting from a clean slate -- to not only be precise _and_ clear, but
to fit together well. IME it's the only serious language other than
COBOL in which one can write reasonably good code that also reads as
tolerable prose. ...

Forth?
Inform 7.

Richard
Jul 2 '07 #82


glen herrmannsfeldt <ga*@ugcs.caltech.edu> writes:
Jean-Marc Bourguet wrote:
(snip)
The historical meaning of byte is for sure this one:
"A group of bits sufficient to represent one character is called a _byte_
-- a term coined in 1958 by Werner Buchholz."
Computer Architecture, Concepts and Evolution Gerrit A. Blaauw and
Frederick P. Brooks, Jr.
(They give as reference a paper of 1959 which they co-authored with
Buchholz).
The IBM 7030 (aka Stretch) had instructions to manipulate bytes (the
architecture description uses the term byte) whose width was anything
between 1 and 8 bits. BTW, Buchholz, Brooks and Blaauw were all three
part of the architecture team of Stretch. The character set designed for
Stretch was the first 8-bit one I know of.

I have the book, I will have to look at it. I always thought that EBCDIC
was designed for S/360.
EBCDIC was designed for S/360. The character set designed for Stretch was
peculiar. For example, it is, along with Baudot, the only character set I know of
for which the digits are not consecutive.
The DEC PDP-10 had instructions to manipulate bytes (again the architecture
description uses the term byte) whose width was anything between 1 and 36
bits, and which were commonly used with 7-bit ASCII characters (yes, there
was a lost bit per word).

The bit isn't lost if you have files with line numbers.
Thanks for reminding me of that.
How much of the PDP-10 has heritage in the IBM 36 bit machines?
I don't know. I've copied comp.arch and alt.folklore.computers and set the
follow-up there; people there probably know.

Yours,

--
Jean-Marc
Jul 2 '07 #83
