
The result of 1 << 32

All,

Here is a question that I cannot understand.

int main()
{
    int i = 32;
    printf("%p %p\n", 1<<i, 1<<32);
}

The result is:
0x1 0x0

("int" type on my machine is 32-bit)

Why do 1<<i and 1<<32 give different results? I had thought
both of them would give 0x0.

--
B.Y.

Nov 15 '05 #1


On 2005-10-29, mrby <bi******@gmail.com> wrote:
All,

Here is one question that I can not understand.

int main()
{
int i = 32;
printf("%p %p\n", 1<<i, 1<<32);
}

The result is:
0x1 0x0

("int" type on my machine is 32-bit)

Why 1<<i and 1<<32 get different result? I had thought
both of them would get 0x0.


A theory: the 1<<32 is calculated at compile time [and the compiler
itself is smart enough to do this] while the 1<<i value is
calculated at runtime - now, if your processor's LSH instruction
only uses a 5-bit shift count, the runtime shift by 32 is reduced to
a shift by 0, which would explain the 0x1.

According to the standard, shifting by a number greater than or
equal to the field width is undefined.

A.6.2 Undefined Behavior

* An expression is shifted by a negative number or by an amount
greater than or equal to the width in bits of the expression
being shifted (3.3.7).
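
For illustration, here is a sketch of the same program with the format
specifiers changed to %d (the arguments are ints, not pointers). Both
shifts are still undefined behaviour when int is 32 bits wide, so none
of the commented results are guaranteed; they only describe what a
machine that keeps just the low 5 bits of the shift count might
plausibly do.

#include <stdio.h>

int main(void)
{
    int i = 32;

    /* Runtime shift: hardware that keeps only the low 5 bits of the
       shift count turns a shift by 32 into a shift by 0, giving 1. */
    printf("1 << i  = %d\n", 1 << i);

    /* Constant shift: the compiler may fold this at translation time,
       often to 0, and will usually warn about the out-of-range count. */
    printf("1 << 32 = %d\n", 1 << 32);

    return 0;
}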
Nov 15 '05 #2

Your explanation makes sense! I do appreciate it.

--
B.Y.

Nov 15 '05 #3

On Sat, 29 Oct 2005 15:41:20 +0000, Jordan Abel wrote:
On 2005-10-29, mrby <bi******@gmail.com> wrote:
All,

Here is one question that I can not understand.

int main()
{
int i = 32;
printf("%p %p\n", 1<<i, 1<<32);
}

The result is:
0x1 0x0

("int" type on my machine is 32-bit)

Why 1<<i and 1<<32 get different result? I had thought
both of them would get 0x0.


A theory: the 1<<32 is calculated at compile time [and the compiler
itself is smart enough to do this] while the 1<<i value is


So you're assuming wrap-around, which is reasonable but not the only
possibility. As you quoted from the standard, this is undefined behaviour
- all bets are off and any result's possible.

To add to that, the printf format specifier used is that of a pointer. It
should be %d.
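
A minimal sketch of that correction, using in-range shift counts so the
example itself is well defined: %d matches an int argument, while %p
expects a void * (passing anything else through %p is also undefined
behaviour).

#include <stdio.h>

int main(void)
{
    int i = 4;

    printf("%d %d\n", 1 << i, 1 << 5);   /* int arguments match %d    */
    printf("%p\n", (void *)&i);          /* %p wants a void * value   */
    return 0;
}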

[...]
--
http://members.dodo.com.au/~netocrat
Nov 15 '05 #4

On 2005-10-29, Netocrat <ne******@dodo.com.au> wrote:
On Sat, 29 Oct 2005 15:41:20 +0000, Jordan Abel wrote:
On 2005-10-29, mrby <bi******@gmail.com> wrote:
All,

Here is one question that I can not understand.

int main()
{
int i = 32;
printf("%p %p\n", 1<<i, 1<<32);
}

The result is:
0x1 0x0

("int" type on my machine is 32-bit)

Why 1<<i and 1<<32 get different result? I had thought
both of them would get 0x0.
A theory: the 1<<32 is calculated at compile time [and the
compiler itself is smart enough to do this] while the 1<<i value
is


So you're assuming wrap-around, which is reasonable but not the
only possibility. As you quoted from the standard, this is
undefined behaviour - all bets are off and any result's possible.


I was giving a possible reason why this might have happened.
"Because it's undefined" isn't an answer. Things do happen for a
reason - there's a reason the value was 1 instead of 42 or
0xdeadbeef or crashing or starting up a game of nethack. The
standard gives a lot of latitude, but compiler authors tend not to
be as capricious as they could be. It's also a possible reason why
this was allowed to be undefined, rather than mandating one
particular behavior or another.
To add to that, the printf format specifier used is that of a
pointer. It should be %d.
oops, didn't notice that part
[...]

Nov 15 '05 #5

On Sat, 29 Oct 2005 16:49:49 +0000 (UTC), in comp.lang.c , Jordan Abel
<jm****@purdue.edu> wrote:
I was giving a possible reason why this might have happened.
"Because it's undefined" isn't an answer.


It's a good idea to explain what's going on. However I find it better to
do it the other way round:

"This is undefined behaviour, which means your compiler isn't actually
required to handle it sensibly. Typically however compiler writers
will implement wrap-around or..."

This ensures your audience learn /first/ that it's UB and naughty,
/then/ about how it might be handled. This avoids the risk that they
leave after para 1, and think that /all/ compilers implement
wraparound.
--
Mark McIntyre
CLC FAQ <http://www.eskimo.com/~scs/C-faq/top.html>
CLC readme: <http://www.ungerhu.com/jxh/clc.welcome.txt>

Nov 15 '05 #6

In article <q6********************************@4ax.com>,
Mark McIntyre <ma**********@spamcop.net> wrote:
"This is undefined behaviour, which means your compiler isn't actually
required to handle it sensibly. Typically however compiler writers
will implement wrap-around or..."

This ensures your audience learn /first/ that it's UB and naughty,
/then/ about how it might be handled. This avoids the risk that they
leave after para 1, and think that /all/ compilers implement
wraparound.


I'd like to emphasize Jordan's other point, which you snipped:
It's also a possible reason why
this was allowed to be undefined, rather than mandating one
particular behavior or another.


Explaining the plausible implementations often (as in this case)
explains why the behaviour is undefined. Understanding that is better
than just knowing that it's undefined.

-- Richard
Nov 15 '05 #7

In article <sl*******************@random.yi.org>,
Jordan Abel <jm****@purdue.edu> wrote:
On 2005-10-29, mrby <bi******@gmail.com> wrote:
Why 1<<i and 1<<32 get different result? I had thought
both of them would get 0x0.

A theory: the 1<<32 is calculated at compile time [and the compiler
itself is smart enough to do this] while the 1<<i value is
calculated at runtime - now, if your processor's LSH instruction
only accepts a 5-bit immediate operand.

According to the standard, shifting by a number greater than or
equal to the field width is undefined.


I don't have my reference material here, but I seem to recall that
constant calculations are supposed to be done "as if" they were
done at run-time. Possibly that only applied to calculations in
preprocessor expressions.

If my recollection is correct, then even though the shift behaviour
is undefined, would it not be required to be consistent?
--
"It is important to remember that when it comes to law, computers
never make copies, only human beings make copies. Computers are given
commands, not permission. Only people can be given permission."
-- Brad Templeton
Nov 15 '05 #8

Walter Roberson wrote:
In article <sl*******************@random.yi.org>,
Jordan Abel <jm****@purdue.edu> wrote:
On 2005-10-29, mrby <bi******@gmail.com> wrote:

Why 1<<i and 1<<32 get different result? I had thought
both of them would get 0x0.

A theory: the 1<<32 is calculated at compile time [and the compiler
itself is smart enough to do this] while the 1<<i value is
calculated at runtime - now, if your processor's LSH instruction
only accepts a 5-bit immediate operand.

According to the standard, shifting by a number greater than or
equal to the field width is undefined.


I don't have my reference material here, but I seem to recall that
constant calculations are supposed to be done "as if" they were
done at run-time. Possibly that only applied to calculations in
preprocessor expressions.

If my recollection is correct, then even though the shift behaviour
is undefined, would it not be required to be consistant?


No, the result of undefined behaviour is not required to be consistent.
If undefined behaviour had to be consistent then that would effectively
mean having to detect buffer overruns, otherwise how could it provide
consistent behaviour on the undefined behaviour caused by writing beyond
the end of a buffer?
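
As an illustration (not part of the original post), this is the kind of
overrun meant here. Whether the stray store lands on another variable,
on padding, or on something the implementation needs for its own
bookkeeping is outside the program's control, so the visible effect can
change from build to build:

#include <stdio.h>

int main(void)
{
    int after = 42;
    int buf[4];

    buf[4] = -1;   /* one element past the end: undefined behaviour */

    printf("after = %d\n", after);   /* may print 42, may print -1,
                                        may crash, may do anything */
    return 0;
}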
--
Flash Gordon
Living in interesting times.
Although my email address says spam, it is real and I read it.
Nov 15 '05 #9

On 2005-10-30, Walter Roberson <ro******@ibd.nrc-cnrc.gc.ca> wrote:
I don't have my reference material here, but I seem to recall that
constant calculations are supposed to be done "as if" they were
done at run-time. Possibly that only applied to calculations in
preprocessor expressions.
It does not apply to undefined behavior.
If my recollection is correct, then even though the shift
behaviour is undefined, would it not be required to be consistant?


There are no rules which apply to undefined behavior. Were this
unspecified or implementation-defined, that would be true.
Nov 15 '05 #10

In article <dk**********@canopus.cc.umanitoba.ca>,
ro******@ibd.nrc-cnrc.gc.ca (Walter Roberson) wrote:
In article <sl*******************@random.yi.org>,
Jordan Abel <jm****@purdue.edu> wrote:
On 2005-10-29, mrby <bi******@gmail.com> wrote:

Why 1<<i and 1<<32 get different result? I had thought
both of them would get 0x0.

A theory: the 1<<32 is calculated at compile time [and the compiler
itself is smart enough to do this] while the 1<<i value is
calculated at runtime - now, if your processor's LSH instruction
only accepts a 5-bit immediate operand.

According to the standard, shifting by a number greater than or
equal to the field width is undefined.


I don't have my reference material here, but I seem to recall that
constant calculations are supposed to be done "as if" they were
done at run-time. Possibly that only applied to calculations in
preprocessor expressions.

If my recollection is correct, then even though the shift behaviour
is undefined, would it not be required to be consistant?


"Undefined behavior" includes permission to be inconsistent. If the
hardware produces a random number as a result of trying to calculate 1
<< 32, then the compiler can do the same thing at compile-time.

In many implementations, it would be possible to define the behavior of
x << y for arbitrary values of y. If the implementation defines the
behavior, then it would implement it both at compile time and at runtime
consistently with that definition.
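
As a sketch of what defining it at the source level could look like,
here is a hypothetical helper (shift_left is an invented name, not from
the standard or from this thread) that simply chooses a result for
out-of-range counts instead of leaving them to the implementation:

#include <limits.h>
#include <stdio.h>

/* Left shift with a result the program defines for every count:
   negative or too-large counts yield 0 by our own choice. */
static unsigned int shift_left(unsigned int x, int n)
{
    int width = (int)(CHAR_BIT * sizeof x);  /* bit count, ignoring any padding bits */

    if (n < 0 || n >= width)
        return 0;
    return x << n;
}

int main(void)
{
    /* prints "16 0 0" on an implementation with 32-bit unsigned int */
    printf("%u %u %u\n", shift_left(1, 4), shift_left(1, 32), shift_left(1, -1));
    return 0;
}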

If the hardware behaves consistently, but the implementation doesn't
actually _define_ that behavior, then all bets are off. However, this
could be the source of an incredibly hard to find bug. For example, in x
<< y a compiler might figure out that y always has the same value at a
high enough optimisation level. So in the example

int i = 32;
int result = 1 << i;

a compiler might do a shift at runtime when it is not optimising, and do
a shift at compile time when it is optimising. Good luck finding the bug
if that happens.
Nov 15 '05 #11

On 29 Oct 2005 23:23:24 GMT, in comp.lang.c , ri*****@cogsci.ed.ac.uk
(Richard Tobin) wrote:
In article <q6********************************@4ax.com>,
Mark McIntyre <ma**********@spamcop.net> wrote:
This ensures your audience learn /first/ that its UB and naughty,
/then/ about how it might be handled.


Explaining the plausible implementations often (as in this case)
explains why the behaviour is undefined. Understanding that is better
than just knowing that it's undefined.


.... and if you actually read what I say, you'll see that I am still
advocating supplying both bits of information, as you suggest.

My point was that if you stress first that it's UB, then explain likely
implementation-specific solutions, you ensure that newbies leave
knowing they can't rely on it.

I've noticed that undergrad level newbies will read para 1 then doze
off or start to play with their mobe, and so miss the rest of the
Lesson...
--
Mark McIntyre
CLC FAQ <http://www.eskimo.com/~scs/C-faq/top.html>
CLC readme: <http://www.ungerhu.com/jxh/clc.welcome.txt>

Nov 15 '05 #12

In article <q1********************************@4ax.com>,
Mark McIntyre <ma**********@spamcop.net> wrote:
... and if you actually read what I say


I did read it, but I still wanted to draw attention to a specific
reason for talking about the implementation.

-- Richard
Nov 15 '05 #13

On 30 Oct 2005 16:51:44 GMT, in comp.lang.c , ri*****@cogsci.ed.ac.uk
(Richard Tobin) wrote:
In article <q1********************************@4ax.com>,
Mark McIntyre <ma**********@spamcop.net> wrote:
... and if you actually read what I say


I did read it, but I still wanted to draw attention to a specific
reason for talking about the implementation.


okay, we're on the same wavelength.
--
Mark McIntyre
CLC FAQ <http://www.eskimo.com/~scs/C-faq/top.html>
CLC readme: <http://www.ungerhu.com/jxh/clc.welcome.txt>

Nov 15 '05 #14

Jordan Abel wrote:
On 2005-10-30, Walter Roberson <ro******@ibd.nrc-cnrc.gc.ca> wrote:
I don't have my reference material here, but I seem to recall that
constant calculations are supposed to be done "as if" they were
done at run-time. Possibly that only applied to calculations in
preprocessor expressions.


It does not apply to undefined behavior.
If my recollection is correct, then even though the shift
behaviour is undefined, would it not be required to be consistant?


There are no rules which apply to undefined behavior. Were this
unspecified or implementation-defined, that would be true.


It is not true for unspecified behaviour either. The code

foo( f(), g() );

could call f and g in different orders depending on what day
of the week it is.
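
A small self-contained sketch of that point (f, g and foo are made up
for illustration): either trace line may appear first, and the order can
legitimately differ between compilers, optimisation levels, or even days
of the week.

#include <stdio.h>

static int f(void) { puts("f called"); return 1; }
static int g(void) { puts("g called"); return 2; }

static void foo(int a, int b)
{
    printf("foo(%d, %d)\n", a, b);
}

int main(void)
{
    foo(f(), g());   /* f and g may be evaluated in either order */
    return 0;
}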

Nov 15 '05 #15

Flash Gordon wrote:
No, the result of undefined behaviour is not required to be consistent.
If undefined behaviour had to be consistent then that would effectively
mean having to detect buffer overruns, otherwise how could it provide
consistent behaviour on the undefined behaviour caused by writing beyond
the end of a buffer?


Are there real examples where it's not consistent? Buffer overruns
generally are, provided the data written is the same in all cases. I'm
curious to learn of any times when one time the daemons come out the
left nostril, and the next time the right, so to speak.

Nov 15 '05 #16

tedu wrote:
Flash Gordon wrote:
No, the result of undefined behaviour is not required to be consistent.
If undefined behaviour had to be consistent then that would effectively
mean having to detect buffer overruns, otherwise how could it provide
consistent behaviour on the undefined behaviour caused by writing beyond
the end of a buffer?


Are there real examples where it's not consistent? Buffer overruns
generally are, provided the data written is the same in all cases. I'm
curious to learn of any times when one time the daemons come out the
left nostril, and the next time the right, so to speak.


I've come across numerous instances where changing the optimisation
level, or a compiler switch, or a line of code somewhere apparently
completely unrelated has changed the visible effect of undefined
behaviour. I've come across code with undefined behaviour that will
crash 1 run in 10. I've come across instances where the program would
apparently work when run inside a debugger but not if run standalone.

The thing is that with a lot of applications in the real world things
*don't* happen exactly the same every run.
--
Flash Gordon
Living in interesting times.
Although my email address says spam, it is real and I read it.
Nov 15 '05 #17

Flash Gordon wrote:
... the result of undefined behaviour is not required to be consistent.

In article <11**********************@f14g2000cwb.googlegroups.com>
tedu <tu@zeitbombe.org> wrote:
Are there real examples where it's not consistent? Buffer overruns
generally are, provided the data written is the same in all cases.


I assume you mean "generally are consistent". This is true on
some systems and not on others. In particular, systems in which
C programs are run in a "virtual address space" that is initialized
the same way on each run tend to act consistently (which is very
nice for debugging). Those that do not, tend not to.

The same applies to, e.g., use of uninitialized local variables:

#include <stdio.h>

int main(void) {
    int i;
    printf("uninitialized i = %d\n", i);
    return 0;
}

This program tends to produce consistent output (though not always
0) on Unix-like systems, including Linux, but not on old microcomputers
(where the contents of RAM and/or registers on each program launch
tend to depend on what was run before). (But even on Linux, you
may be able to arrange different output from one run to the next,
even without recompiling, by fiddling with dynamic linker options.)
--
In-Real-Life: Chris Torek, Wind River Systems
Salt Lake City, UT, USA (40°39.22'N, 111°50.29'W) +1 801 277 2603
email: forget about it http://web.torek.net/torek/index.html
Reading email is like searching for food in the garbage, thanks to spammers.
Nov 15 '05 #18

On 29 Oct 2005 08:25:11 -0700, "mrby" <bi******@gmail.com> wrote:
All,

Here is one question that I can not understand.

int main()
{
int i = 32;
printf("%p %p\n", 1<<i, 1<<32);
%p is the format for printing a void*, not for printing an int.
}

The result is:
0x1 0x0

("int" type on my machine is 32-bit)

Why 1<<i and 1<<32 get different result? I had thought
both of them would get 0x0.

<<Remove the del for email>>
Nov 15 '05 #19
