Bytes | Developer Community
How many bits are '1' in a integer variable?

Hi folks,

I am trying to find a more efficient way to count "How many bits are
'1' in a integer variable?".

I still have no idea to count the bits except using a loop and "if"
statements.
Could you know any other more efficient way?

Cuthbert

int main (void)
{
int var = 0xFF0F;
int i, count = 0;
int mask = 1;
for ( i = 0; i < sizeof(int)*8 ; i++ )
if ( mask<<i & var) count++ ;

printf("%d\n", count);

return 0;
}

Sep 12 '06 #1
34 replies, 10817 views
Cuthbert wrote:
Hi folks,

I am trying to find a more efficient way to count "How many bits are
'1' in a integer variable?".

I still have no idea to count the bits except using a loop and "if"
statements.
Could you know any other more efficient way?
Break the int into bytes and use a lookup table.
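A minimal sketch of that idea (the function and table names are my own, and it assumes CHAR_BIT == 8):

```c
#include <assert.h>

/* Byte-table popcount sketch; table[] and the names are mine,
 * and the code assumes CHAR_BIT == 8. */
static unsigned char table[256];

static void init_table(void)
{
    int i;
    for (i = 1; i < 256; i++)         /* table[0] is already 0 */
        table[i] = (unsigned char)((i & 1) + table[i / 2]);
}

static unsigned count_bits(unsigned n)
{
    unsigned count = 0;
    while (n != 0) {
        count += table[n & 0xFFu];    /* look up one byte at a time */
        n >>= 8;
    }
    return count;
}
```

A 16-entry nibble table is a common compromise: the table fits comfortably in cache at the cost of twice as many lookups.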

--
Thomas M. Sommers -- tm*@nj.net -- AB2SB

Sep 12 '06 #2
T.M. Sommers posted:
Cuthbert wrote:
>Hi folks,

I am trying to find a more efficient way to count "How many bits are
'1' in a integer variable?".

I still have no idea to count the bits except using a loop and "if"
statements.
Could you know any other more efficient way?

Break the int into bytes and use a lookup table.

I'm sure there's a better way than that... perhaps even an expression
yielding a compile-time constant. Maybe something like:
#define UINT_BITS_SET(x)\
!!((x)&1U) + !!((x)&1U<<1) + !!((x)&1U<<2) + !!((x)&1U<<3)\
+ !!((x)&1U<<4) + !!((x)&1U<<5) + !!((x)&1U<<6) + !!((x)&1U<<7)\
+ !!((x)&1U<<8) + !!((x)&1U<<9) + !!((x)&1U<<10) + !!((x)&1U<<11)\
+ !!((x)&1U<<12) + !!((x)&1U<<13) + !!((x)&1U<<14) + !!((x)&1U<<15)\
+ !!((x)&1U<<16) + !!((x)&1U<<17) + !!((x)&1U<<18) + !!((x)&1U<<19)\
+ !!((x)&1U<<20)

I'm not sure if it's undefined behaviour to left shift an unsigned int by
more places than it has bits; if not, then you can simply go as high as you
want:

+ !!((x)&1U<<63548)

If it is, however, then you could use the IMAX_BITS macro in conjunction
with the modulus and division operators to keep things safe.

--

Frederick Gotham
Sep 12 '06 #3

Cuthbert wrote:
Hi folks,
a bit simpler and therefore perhaps somewhat faster:

int main (void)
{
int var = 0xFF0F;
int i, count = 0;

for ( i = 0; i < sizeof(int)*8 ; i++ )
{ count += var & 1; var >>= 1; }

printf("%d\n", count);

return 0;
}

a byte lookup table might or might not be faster, depending on what
happens with the cache.

Sep 12 '06 #4
In article <11**********************@d34g2000cwd.googlegroups.com>,
Ancient_Hacker <gr**@comcast.net> wrote:
>Cuthbert wrote:
>Hi folks,
Please quote enough context so that people know what is being discussed.
In this case it is counting the number of 1 bits in an integer variable.
>a bit simpler and therefore perhaps somewhat faster:
>int main (void)
{
int var = 0xFF0F;
int i, count = 0;
for ( i = 0; i < sizeof(int)*8 ; i++ )
{ count += var & 1; var >>= 1; }
printf("%d\n", count);

return 0;
}
Why sizeof(int)*8 ?

It appears to me that you have assumed CHAR_BIT to be 8.
--
"law -- it's a commodity"
-- Andrew Ryan (The Globe and Mail, 2005/11/26)
Sep 12 '06 #5
Ancient_Hacker wrote:
a byte lookup table might or might not be faster, depending on what
happens with the cache.
My favorite approach that I've seen to this (homework) problem involved
a combination of table lookup (for all 16 of the 4-bit combinations)
and a series of shifts.

Pretty good compromise.

Here are a few different ways to do it.
http://graphics.stanford.edu/~seande...ntBitsSetNaive
Sep 12 '06 #6
Frederick Gotham said:
T.M. Sommers posted:
>Cuthbert wrote:
>>Hi folks,

I am trying to find a more efficient way to count "How many bits are
'1' in a integer variable?".

I still have no idea to count the bits except using a loop and "if"
statements.
Could you know any other more efficient way?

Break the int into bytes and use a lookup table.


I'm sure there's a better way than that... perhaps even an expression
yielding a compile-time constant. Maybe something like:
#define UINT_BITS_SET(x)\
!!((x)&1U) + !!((x)&1U<<1) + !!((x)&1U<<2) + !!((x)&1U<<3)\
+ !!((x)&1U<<4) + !!((x)&1U<<5) + !!((x)&1U<<6) + !!((x)&1U<<7)\
+ !!((x)&1U<<8) + !!((x)&1U<<9) + !!((x)&1U<<10) + !!((x)&1U<<11)\
+ !!((x)&1U<<12) + !!((x)&1U<<13) + !!((x)&1U<<14) + !!((x)&1U<<15)\
+ !!((x)&1U<<16) + !!((x)&1U<<17) + !!((x)&1U<<18) + !!((x)&1U<<19)\
+ !!((x)&1U<<20)
I'm not convinced that's a better way than:

unsigned int count_set_bits(unsigned long n)
{
unsigned int set_bits[] =
{
0, 1, 1, 2, 1, 2, 2, 3, 1, 2, 2, 3, 2, 3, 3, 4
};
unsigned int count = 0;
while(n > 0)
{
count += set_bits[n & 0xf];
n >>= 4;
}
return count;
}

Your solution doesn't appear to cope with integers wider than 21 bits, for a
start. Secondly, it doesn't cope with integers that are /fewer/ than 21
bits wide! Thirdly, it could conceivably be counting "padding bits", bits
that do not contribute to the value of the integer.
I'm not sure if it's undefined behaviour to left shift an unsigned int by
more places than it has bits; if not, then you can simply go as high as
you want:
Quoth 3.3.7 of C90: "The integral promotions are performed on each of the
operands. The type of the result is that of the promoted left operand. If
the value of the right operand is negative or is greater than or equal to
the width in bits of the promoted left operand, the behavior is undefined."

--
Richard Heathfield
"Usenet is a strange place" - dmr 29/7/1999
http://www.cpax.org.uk
email: rjh at above domain (but drop the www, obviously)
Sep 12 '06 #7
Cuthbert wrote:
Hi folks,

I am trying to find a more efficient way to count "How many bits are
'1' in a integer variable?".
http://graphics.stanford.edu/~seander/bithacks.html

Pay close attention, though; most of these hacks assume integer sizes, two's
complement machines, or other things that may be quite justifiable, but
nevertheless render the code unportable.

S.
Sep 12 '06 #8
Skarmander wrote:
Cuthbert wrote:
>Hi folks,

I am trying to find a more efficient way to count "How many bits are
'1' in a integer variable?".
http://graphics.stanford.edu/~seander/bithacks.html

Pay close attention, though; most of these hacks assume integer sizes,
two's complement machines, or other things that may be quite
justifiable, but nevertheless render the code unportable.

S.
I believe many of these hacks are in "Hacker's Delight" by
Henry Warren. As is the answer to the OP's question.

--
Julian V. Noble
Professor Emeritus of Physics
University of Virginia
Sep 12 '06 #9
Cuthbert wrote:
>
Hi folks,

I am trying to find a more efficient way to count "How many bits are
'1' in a integer variable?".

I still have no idea to count the bits except using a loop and "if"
statements.
Could you know any other more efficient way?

Cuthbert

int main (void)
{
int var = 0xFF0F;
int i, count = 0;
int mask = 1;
for ( i = 0; i < sizeof(int)*8 ; i++ )
if ( mask<<i & var) count++ ;

printf("%d\n", count);

return 0;
}
/* BEGIN new.c */

#include <stdio.h>

unsigned bit_count(unsigned n);

int main (void)
{
unsigned var = 0xFF0F;
unsigned count = bit_count(var);

printf("%u\n", count);
return 0;
}

unsigned bit_count(unsigned n)
{
unsigned count;

for (count = 0; n != 0; n &= n - 1) {
++count;
}
return count;
}

/* END new.c */
--
pete
Sep 12 '06 #10

Cuthbert wrote:
Hi folks,

I am trying to find a more efficient way to count "How many bits are
'1' in a integer variable?".

I still have no idea to count the bits except using a loop and "if"
statements.
Could you know any other more efficient way?

Cuthbert

int main (void)
{
int var = 0xFF0F;
int i, count = 0;
int mask = 1;
for ( i = 0; i < sizeof(int)*8 ; i++ )
if ( mask<<i & var) count++ ;

printf("%d\n", count);

return 0;
}
There is a faster method that is easy to remember, x &= (x-1), and it
doesn't require any storage. Here is the program:

#include <stdio.h>

int main(void)
{
int x = 0xFF0F;
int n = 0;

while(x)
{
x &= (x-1);
n++;
}

printf("No of bits = %d\n", n);
return 0;
}

The worst case for the loop occurs when all the bits are set.
On each iteration the statement x = x & (x-1) clears one set
bit. Just check it out manually.
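For concreteness, here is that loop as a standalone function (the name is mine); for x = 12 (binary 1100), x-1 = 1011, and the AND clears the lowest set bit, leaving 1000:

```c
#include <assert.h>

/* Kernighan-style count: each x &= (x - 1) clears the lowest set bit,
 * so the loop runs once per set bit, not once per bit position. */
static int bit_count(unsigned x)
{
    int n = 0;
    while (x) {
        x &= (x - 1);   /* e.g. 1100 -> 1000 -> 0000 */
        n++;
    }
    return n;
}
```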

There is even better and faster algorithm but difficult to remember.

#define MASK_01010101 (((unsigned int)(-1))/3)
#define MASK_00110011 (((unsigned int)(-1))/5)
#define MASK_00001111 (((unsigned int)(-1))/17)

int bitcount (unsigned int n)
{
n = (n & MASK_01010101) + ((n >> 1) & MASK_01010101) ;
n = (n & MASK_00110011) + ((n >> 2) & MASK_00110011) ;
n = (n & MASK_00001111) + ((n >> 4) & MASK_00001111) ;
return n % 255 ;
}

I got this algorithm from my old book; it's hard to remember (at least for me).

Sep 13 '06 #11
On 12 Sep 2006 13:48:55 -0700, "Ancient_Hacker" <gr**@comcast.net>
wrote:
>
Cuthbert wrote:
>Hi folks,
a bit simpler and therefore perhaps somewhat faster:

int main (void)
{
int var = 0xFF0F;
int i, count = 0;

for ( i = 0; i < sizeof(int)*8 ; i++ )
{ count += var & 1; var >>= 1; }

printf("%d\n", count);

return 0;
}

a byte lookup table might or might not be faster, depending on what
happens with the cache.
On the TI 5501 DSP I'm currently working with, where sizeof(int) == 1
and CHAR_BIT == 16 (and the byte ordering is Big Endian, to boot), I'm
skeptical that this would work.

Use of a lookup table could easily result in exceeding the size of my
available EEPROM that stores my program memory (and that assumes that
you constructed the table with a priori knowledge that CHAR_BIT ==
16).

The least of my worries about this code may well be what happens with
the cache.

My PC-lint warnings make me even more twitchy about this code:

STD_FILE: std_5501.lnt
PC-lint for C/C++ (NT) Vers. 8.00u, Copyright Gimpel Software
1985-2006

--- Module: bits.c (C)
_
int var = 0xFF0F;
bits.c(3) : Warning 569: Loss of information (initialization) (16 bits
to 15 bits)
_
for ( i = 0; i < sizeof(int)*8 ; i++ )
bits.c(6) : Warning 574: Signed-unsigned mix with relational
_
{ count += var & 1; var >>= 1; }
bits.c(7) : Info 702: Shift right of signed quantity (int)
_
printf("%d\n", count);
bits.c(9) : Info 718: Symbol 'printf' undeclared, assumed to return
int
bits.c(9) : Info 746: call to function 'printf()' not made in the
presence of a prototype

/// Start of Pass 2 ///

--- Module: bits.c (C)

--- Global Wrap-up

Note 900: Successful completion, 5 messages produced
Tool returned code: 0

Regards
--
jay
Sep 13 '06 #12
Cuthbert <cu**********@gmail.com> wrote:
Hi folks,
I am trying to find a more efficient way to count "How many bits are
'1' in a integer variable?".
I still have no idea to count the bits except using a loop and "if"
statements.
Could you know any other more efficient way?
There are a couple.

E.g. when you take the 2's complement of a number, you do not complement
the lowest-order bit. Using (-x)&x is therefore a way of "skipping to
the next bit that's set" rather than having to go through bit positions
one at a time -- half the time finding a 0.
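As a small illustration (the function name is mine): on unsigned types, -x wraps around to the two's-complement value, so (-x) & x isolates the lowest set bit:

```c
#include <assert.h>

/* (-x) & x isolates the lowest set bit of x. On unsigned types the
 * negation wraps modulo 2^N, which is exactly the two's complement. */
static unsigned lowest_set_bit(unsigned x)
{
    return (unsigned)(-x) & x;
}
```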
Then there's the mysterious method that relies on carry in 2s complement
allowing you to "count" the 1's directly.
It's a recursive method (which you can later flatten out for particular
sizes of your computer's native word) based on the idea "how many bits
are set in a 2-bit quantity b1b2". To find out how many in this 2-bit
problem, you want to add b1 and b2, i.e. (x&1)+((x>>1)&1).
Now, what if we have (say) 4 bits b1b2b3b4? We can use
(x&5)+((x>>1)&5) to get [sum b1,b2][sum b3,b4] -- i.e. 2 x 2-bit quantities,
and then use
(x&3)+((x>>2)&3) to convert that to [sum b1,b2,b3,b4].
Just build up logically, as the old short story said. :)
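Flattened for an 8-bit quantity, the build-up sketched above might look like this (a sketch of mine, with hex masks standing in for the 2-bit example's constants):

```c
#include <assert.h>

/* Pairwise-sum (divide and conquer) popcount for an 8-bit value. */
static unsigned count8(unsigned x)
{
    x = (x & 0x55u) + ((x >> 1) & 0x55u); /* sums of adjacent bit pairs */
    x = (x & 0x33u) + ((x >> 2) & 0x33u); /* sums of 2-bit fields       */
    x = (x & 0x0Fu) + ((x >> 4) & 0x0Fu); /* sum of the two nibbles     */
    return x;
}
```

Each line doubles the field width, so a 32-bit version needs just two more steps (shift by 8 and by 16).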
Sep 13 '06 #13
kondal said:

<snip>
There is even better and faster algorithm but difficult to remember.

#define MASK_01010101 (((unsigned int)(-1))/3)
#define MASK_00110011 (((unsigned int)(-1))/5)
#define MASK_00001111 (((unsigned int)(-1))/17)

int bitcount (unsigned int n)
{
n = (n & MASK_01010101) + ((n >> 1) & MASK_01010101) ;
n = (n & MASK_00110011) + ((n >> 2) & MASK_00110011) ;
n = (n & MASK_00001111) + ((n >> 4) & MASK_00001111) ;
return n % 255 ;
}
What makes you think this is faster? And what happens when the code is
ported to a platform with 64-bit unsigned ints?

--
Richard Heathfield
"Usenet is a strange place" - dmr 29/7/1999
http://www.cpax.org.uk
email: rjh at above domain (but drop the www, obviously)
Sep 13 '06 #14
Richard Heathfield wrote:
kondal said:

<snip>
There is even better and faster algorithm but difficult to remember.

#define MASK_01010101 (((unsigned int)(-1))/3)
#define MASK_00110011 (((unsigned int)(-1))/5)
#define MASK_00001111 (((unsigned int)(-1))/17)

int bitcount (unsigned int n)
{
n = (n & MASK_01010101) + ((n >> 1) & MASK_01010101) ;
n = (n & MASK_00110011) + ((n >> 2) & MASK_00110011) ;
n = (n & MASK_00001111) + ((n >> 4) & MASK_00001111) ;
return n % 255 ;
}

What makes you think this is faster? And what happens when the code is
ported to a platform with 64-bit unsigned ints?

--
Richard Heathfield
"Usenet is a strange place" - dmr 29/7/1999
http://www.cpax.org.uk
email: rjh at above domain (but drop the www, obviously)
There is no loop here to check, and the number of clock cycles executed
would be far less than with my previous algorithm.

Here is the program for 64 bits

#define MASK_01010101 (((unsigned int)(-1))/3)
#define MASK_00110011 (((unsigned int)(-1))/5)
#define MASK_00001111 (((unsigned int)(-1))/17)
#define MASK_00111100 (((unsigned int)(-1))/257)

int bitcount (unsigned int n)
{
n = (n & MASK_01010101) + ((n >> 1) & MASK_01010101) ;
n = (n & MASK_00110011) + ((n >> 2) & MASK_00110011) ;
n = (n & MASK_00001111) + ((n >> 4) & MASK_00001111) ;
n = (n & MASK_00111100) + ((n >> 8) & MASK_00111100) ;
return n % 255 ;
}

-kondal

Sep 13 '06 #15
kondal wrote:
>
#define MASK_01010101 (((unsigned int)(-1))/3)
#define MASK_00110011 (((unsigned int)(-1))/5)
#define MASK_00001111 (((unsigned int)(-1))/17)
#define MASK_00111100 (((unsigned int)(-1))/257)

int bitcount (unsigned int n)
{
n = (n & MASK_01010101) + ((n >> 1) & MASK_01010101) ;
n = (n & MASK_00110011) + ((n >> 2) & MASK_00110011) ;
n = (n & MASK_00001111) + ((n >> 4) & MASK_00001111) ;
n = (n & MASK_00111100) + ((n >> 8) & MASK_00111100) ;
return n % 255 ;
}
Sorry, I forgot: 'unsigned int' should be replaced by 'unsigned long
long int'.

-kondal

Sep 13 '06 #16
kondal said:
Richard Heathfield wrote:
>kondal said:

<snip>
There is even better and faster algorithm but difficult to remember.

#define MASK_01010101 (((unsigned int)(-1))/3)
#define MASK_00110011 (((unsigned int)(-1))/5)
#define MASK_00001111 (((unsigned int)(-1))/17)

int bitcount (unsigned int n)
{
n = (n & MASK_01010101) + ((n >> 1) & MASK_01010101) ;
n = (n & MASK_00110011) + ((n >> 2) & MASK_00110011) ;
n = (n & MASK_00001111) + ((n >> 4) & MASK_00001111) ;
return n % 255 ;
}

What makes you think this is faster? And what happens when the code is
ported to a platform with 64-bit unsigned ints?
<snip>
>
Here is the program for 64 bits

#define MASK_01010101 (((unsigned int)(-1))/3)
#define MASK_00110011 (((unsigned int)(-1))/5)
#define MASK_00001111 (((unsigned int)(-1))/17)
#define MASK_00111100 (((unsigned int)(-1))/257)

int bitcount (unsigned int n)
{
n = (n & MASK_01010101) + ((n >> 1) & MASK_01010101) ;
n = (n & MASK_00110011) + ((n >> 2) & MASK_00110011) ;
n = (n & MASK_00001111) + ((n >> 4) & MASK_00001111) ;
n = (n & MASK_00111100) + ((n >> 8) & MASK_00111100) ;
return n % 255 ;
}
What happens when the code is ported to a platform with 180-bit ints?

--
Richard Heathfield
"Usenet is a strange place" - dmr 29/7/1999
http://www.cpax.org.uk
email: rjh at above domain (but drop the www, obviously)
Sep 13 '06 #17
Richard Heathfield posted:
Your solution doesn't appear to cope with integers wider than 21 bits,
for a start. Secondly, it doesn't cope with integers that are /fewer/
than 21 bits wide! Thirdly, it could conceivably be counting "padding
bits", bits that do not contribute to the value of the integer.
<snip>
"If the value of the right operand is negative or is greater
than or equal to the width in bits of the promoted left operand, the
behavior is undefined."

I meant something more along the following lines. Do compilers nowadays
optimise away Bitwise-AND operations when one of the operands is known to
be 0 at compile-time?

/* Macro: QUANT_BITS_SET

This macro determines the quantity of bits which
are set in an integer expression whose signedness
is unsigned.

The typedef, "BitsSetType", specifies the unsigned
integer type which is to be used for processing.

Before processing takes place, the argument is
converted to BitsSetType.

This macro should work with unsigned integer types
as wide as 1024 bits.

NB: This macro evaluates its argument more than once.
*/

typedef unsigned BitsSetType;

#define IMAX_BITS(m) (\
(m) /((m)%0x3fffffffL+1) /0x3fffffffL \
%0x3fffffffL *30 \
+ (m)%0x3fffffffL /((m)%31+1)/31%31*5 \
+ 4 \
- 12 / ((m)%31+3) )

#define MASK_ZERO_IF_OVER(shift_left_by) (\
shift_left_by < IMAX_BITS((BitsSetType)-1) \
? (BitsSetType)1 << shift_left_by \
: (BitsSetType)0 )

#define QUANT_BITS_SET_RAW(x) (\
!!((x)&(BitsSetType)1) + !!((x)&(BitsSetType)1<<1) \
+!!((x)&(BitsSetType)1<<2) + !!((x)&(BitsSetType)1<<3) \
+!!((x)&(BitsSetType)1<<4) + !!((x)&(BitsSetType)1<<5) \
+!!((x)&(BitsSetType)1<<6) + !!((x)&(BitsSetType)1<<7) \
+!!((x)&(BitsSetType)1<<8) + !!((x)&(BitsSetType)1<<9) \
+!!((x)&(BitsSetType)1<<10) +!!((x)&(BitsSetType)1<<11) \
+!!((x)&(BitsSetType)1<<12) +!!((x)&(BitsSetType)1<<13) \
+!!((x)&(BitsSetType)1<<14) +!!((x)&(BitsSetType)1<<15) \
+!!((x)&MASK_ZERO_IF_OVER(16)) +!!((x)&MASK_ZERO_IF_OVER(17))\
+!!((x)&MASK_ZERO_IF_OVER(18)) +!!((x)&MASK_ZERO_IF_OVER(19))\
+!!((x)&MASK_ZERO_IF_OVER(20)) +!!((x)&MASK_ZERO_IF_OVER(21))\
+!!((x)&MASK_ZERO_IF_OVER(22)) +!!((x)&MASK_ZERO_IF_OVER(23))\
+!!((x)&MASK_ZERO_IF_OVER(24)) +!!((x)&MASK_ZERO_IF_OVER(25))\
+!!((x)&MASK_ZERO_IF_OVER(26)) +!!((x)&MASK_ZERO_IF_OVER(27))\
+!!((x)&MASK_ZERO_IF_OVER(28)) +!!((x)&MASK_ZERO_IF_OVER(29))\
+!!((x)&MASK_ZERO_IF_OVER(30)) +!!((x)&MASK_ZERO_IF_OVER(31))\
/* ... the same pattern continues, two terms per line, with one
   MASK_ZERO_IF_OVER term for every remaining bit position up to
   the macro's stated 1024-bit limit ... */
+!!((x)&MASK_ZERO_IF_OVER(526))+!!((x)&MASK_ZERO_I F_OVER(527))\
+!!((x)&MASK_ZERO_IF_OVER(528))+!!((x)&MASK_ZERO_I F_OVER(529))\
+!!((x)&MASK_ZERO_IF_OVER(530))+!!((x)&MASK_ZERO_I F_OVER(531))\
+!!((x)&MASK_ZERO_IF_OVER(532))+!!((x)&MASK_ZERO_I F_OVER(533))\
+!!((x)&MASK_ZERO_IF_OVER(534))+!!((x)&MASK_ZERO_I F_OVER(535))\
+!!((x)&MASK_ZERO_IF_OVER(536))+!!((x)&MASK_ZERO_I F_OVER(537))\
+!!((x)&MASK_ZERO_IF_OVER(538))+!!((x)&MASK_ZERO_I F_OVER(539))\
+!!((x)&MASK_ZERO_IF_OVER(540))+!!((x)&MASK_ZERO_I F_OVER(541))\
+!!((x)&MASK_ZERO_IF_OVER(542))+!!((x)&MASK_ZERO_I F_OVER(543))\
+!!((x)&MASK_ZERO_IF_OVER(544))+!!((x)&MASK_ZERO_I F_OVER(545))\
+!!((x)&MASK_ZERO_IF_OVER(546))+!!((x)&MASK_ZERO_I F_OVER(547))\
+!!((x)&MASK_ZERO_IF_OVER(548))+!!((x)&MASK_ZERO_I F_OVER(549))\
+!!((x)&MASK_ZERO_IF_OVER(550))+!!((x)&MASK_ZERO_I F_OVER(551))\
+!!((x)&MASK_ZERO_IF_OVER(552))+!!((x)&MASK_ZERO_I F_OVER(553))\
+!!((x)&MASK_ZERO_IF_OVER(554))+!!((x)&MASK_ZERO_I F_OVER(555))\
+!!((x)&MASK_ZERO_IF_OVER(556))+!!((x)&MASK_ZERO_I F_OVER(557))\
+!!((x)&MASK_ZERO_IF_OVER(558))+!!((x)&MASK_ZERO_I F_OVER(559))\
+!!((x)&MASK_ZERO_IF_OVER(560))+!!((x)&MASK_ZERO_I F_OVER(561))\
+!!((x)&MASK_ZERO_IF_OVER(562))+!!((x)&MASK_ZERO_I F_OVER(563))\
+!!((x)&MASK_ZERO_IF_OVER(564))+!!((x)&MASK_ZERO_I F_OVER(565))\
+!!((x)&MASK_ZERO_IF_OVER(566))+!!((x)&MASK_ZERO_I F_OVER(567))\
+!!((x)&MASK_ZERO_IF_OVER(568))+!!((x)&MASK_ZERO_I F_OVER(569))\
+!!((x)&MASK_ZERO_IF_OVER(570))+!!((x)&MASK_ZERO_I F_OVER(571))\
+!!((x)&MASK_ZERO_IF_OVER(572))+!!((x)&MASK_ZERO_I F_OVER(573))\
+!!((x)&MASK_ZERO_IF_OVER(574))+!!((x)&MASK_ZERO_I F_OVER(575))\
+!!((x)&MASK_ZERO_IF_OVER(576))+!!((x)&MASK_ZERO_I F_OVER(577))\
+!!((x)&MASK_ZERO_IF_OVER(578))+!!((x)&MASK_ZERO_I F_OVER(579))\
+!!((x)&MASK_ZERO_IF_OVER(580))+!!((x)&MASK_ZERO_I F_OVER(581))\
+!!((x)&MASK_ZERO_IF_OVER(582))+!!((x)&MASK_ZERO_I F_OVER(583))\
+!!((x)&MASK_ZERO_IF_OVER(584))+!!((x)&MASK_ZERO_I F_OVER(585))\
+!!((x)&MASK_ZERO_IF_OVER(586))+!!((x)&MASK_ZERO_I F_OVER(587))\
+!!((x)&MASK_ZERO_IF_OVER(588))+!!((x)&MASK_ZERO_I F_OVER(589))\
+!!((x)&MASK_ZERO_IF_OVER(590))+!!((x)&MASK_ZERO_I F_OVER(591))\
+!!((x)&MASK_ZERO_IF_OVER(592))+!!((x)&MASK_ZERO_I F_OVER(593))\
+!!((x)&MASK_ZERO_IF_OVER(594))+!!((x)&MASK_ZERO_I F_OVER(595))\
+!!((x)&MASK_ZERO_IF_OVER(596))+!!((x)&MASK_ZERO_I F_OVER(597))\
+!!((x)&MASK_ZERO_IF_OVER(598))+!!((x)&MASK_ZERO_I F_OVER(599))\
+!!((x)&MASK_ZERO_IF_OVER(600))+!!((x)&MASK_ZERO_I F_OVER(601))\
+!!((x)&MASK_ZERO_IF_OVER(602))+!!((x)&MASK_ZERO_I F_OVER(603))\
+!!((x)&MASK_ZERO_IF_OVER(604))+!!((x)&MASK_ZERO_I F_OVER(605))\
+!!((x)&MASK_ZERO_IF_OVER(606))+!!((x)&MASK_ZERO_I F_OVER(607))\
+!!((x)&MASK_ZERO_IF_OVER(608))+!!((x)&MASK_ZERO_I F_OVER(609))\
+!!((x)&MASK_ZERO_IF_OVER(610))+!!((x)&MASK_ZERO_I F_OVER(611))\
+!!((x)&MASK_ZERO_IF_OVER(612))+!!((x)&MASK_ZERO_I F_OVER(613))\
+!!((x)&MASK_ZERO_IF_OVER(614))+!!((x)&MASK_ZERO_I F_OVER(615))\
+!!((x)&MASK_ZERO_IF_OVER(616))+!!((x)&MASK_ZERO_I F_OVER(617))\
+!!((x)&MASK_ZERO_IF_OVER(618))+!!((x)&MASK_ZERO_I F_OVER(619))\
+!!((x)&MASK_ZERO_IF_OVER(620))+!!((x)&MASK_ZERO_I F_OVER(621))\
+!!((x)&MASK_ZERO_IF_OVER(622))+!!((x)&MASK_ZERO_I F_OVER(623))\
+!!((x)&MASK_ZERO_IF_OVER(624))+!!((x)&MASK_ZERO_I F_OVER(625))\
+!!((x)&MASK_ZERO_IF_OVER(626))+!!((x)&MASK_ZERO_I F_OVER(627))\
+!!((x)&MASK_ZERO_IF_OVER(628))+!!((x)&MASK_ZERO_I F_OVER(629))\
+!!((x)&MASK_ZERO_IF_OVER(630))+!!((x)&MASK_ZERO_I F_OVER(631))\
+!!((x)&MASK_ZERO_IF_OVER(632))+!!((x)&MASK_ZERO_I F_OVER(633))\
+!!((x)&MASK_ZERO_IF_OVER(634))+!!((x)&MASK_ZERO_I F_OVER(635))\
+!!((x)&MASK_ZERO_IF_OVER(636))+!!((x)&MASK_ZERO_I F_OVER(637))\
+!!((x)&MASK_ZERO_IF_OVER(638))+!!((x)&MASK_ZERO_I F_OVER(639))\
+!!((x)&MASK_ZERO_IF_OVER(640))+!!((x)&MASK_ZERO_I F_OVER(641))\
+!!((x)&MASK_ZERO_IF_OVER(642))+!!((x)&MASK_ZERO_I F_OVER(643))\
+!!((x)&MASK_ZERO_IF_OVER(644))+!!((x)&MASK_ZERO_I F_OVER(645))\
+!!((x)&MASK_ZERO_IF_OVER(646))+!!((x)&MASK_ZERO_I F_OVER(647))\
+!!((x)&MASK_ZERO_IF_OVER(648))+!!((x)&MASK_ZERO_I F_OVER(649))\
+!!((x)&MASK_ZERO_IF_OVER(650))+!!((x)&MASK_ZERO_I F_OVER(651))\
+!!((x)&MASK_ZERO_IF_OVER(652))+!!((x)&MASK_ZERO_I F_OVER(653))\
+!!((x)&MASK_ZERO_IF_OVER(654))+!!((x)&MASK_ZERO_I F_OVER(655))\
+!!((x)&MASK_ZERO_IF_OVER(656))+!!((x)&MASK_ZERO_I F_OVER(657))\
+!!((x)&MASK_ZERO_IF_OVER(658))+!!((x)&MASK_ZERO_I F_OVER(659))\
+!!((x)&MASK_ZERO_IF_OVER(660))+!!((x)&MASK_ZERO_I F_OVER(661))\
+!!((x)&MASK_ZERO_IF_OVER(662))+!!((x)&MASK_ZERO_I F_OVER(663))\
+!!((x)&MASK_ZERO_IF_OVER(664))+!!((x)&MASK_ZERO_I F_OVER(665))\
+!!((x)&MASK_ZERO_IF_OVER(666))+!!((x)&MASK_ZERO_I F_OVER(667))\
+!!((x)&MASK_ZERO_IF_OVER(668))+!!((x)&MASK_ZERO_I F_OVER(669))\
+!!((x)&MASK_ZERO_IF_OVER(670))+!!((x)&MASK_ZERO_I F_OVER(671))\
+!!((x)&MASK_ZERO_IF_OVER(672))+!!((x)&MASK_ZERO_I F_OVER(673))\
+!!((x)&MASK_ZERO_IF_OVER(674))+!!((x)&MASK_ZERO_I F_OVER(675))\
+!!((x)&MASK_ZERO_IF_OVER(676))+!!((x)&MASK_ZERO_I F_OVER(677))\
+!!((x)&MASK_ZERO_IF_OVER(678))+!!((x)&MASK_ZERO_I F_OVER(679))\
+!!((x)&MASK_ZERO_IF_OVER(680))+!!((x)&MASK_ZERO_I F_OVER(681))\
+!!((x)&MASK_ZERO_IF_OVER(682))+!!((x)&MASK_ZERO_I F_OVER(683))\
+!!((x)&MASK_ZERO_IF_OVER(684))+!!((x)&MASK_ZERO_I F_OVER(685))\
+!!((x)&MASK_ZERO_IF_OVER(686))+!!((x)&MASK_ZERO_I F_OVER(687))\
+!!((x)&MASK_ZERO_IF_OVER(688))+!!((x)&MASK_ZERO_I F_OVER(689))\
+!!((x)&MASK_ZERO_IF_OVER(690))+!!((x)&MASK_ZERO_I F_OVER(691))\
+!!((x)&MASK_ZERO_IF_OVER(692))+!!((x)&MASK_ZERO_I F_OVER(693))\
+!!((x)&MASK_ZERO_IF_OVER(694))+!!((x)&MASK_ZERO_I F_OVER(695))\
+!!((x)&MASK_ZERO_IF_OVER(696))+!!((x)&MASK_ZERO_I F_OVER(697))\
+!!((x)&MASK_ZERO_IF_OVER(698))+!!((x)&MASK_ZERO_I F_OVER(699))\
+!!((x)&MASK_ZERO_IF_OVER(700))+!!((x)&MASK_ZERO_I F_OVER(701))\
+!!((x)&MASK_ZERO_IF_OVER(702))+!!((x)&MASK_ZERO_I F_OVER(703))\
+!!((x)&MASK_ZERO_IF_OVER(704))+!!((x)&MASK_ZERO_I F_OVER(705))\
+!!((x)&MASK_ZERO_IF_OVER(706))+!!((x)&MASK_ZERO_I F_OVER(707))\
+!!((x)&MASK_ZERO_IF_OVER(708))+!!((x)&MASK_ZERO_I F_OVER(709))\
+!!((x)&MASK_ZERO_IF_OVER(710))+!!((x)&MASK_ZERO_I F_OVER(711))\
+!!((x)&MASK_ZERO_IF_OVER(712))+!!((x)&MASK_ZERO_I F_OVER(713))\
+!!((x)&MASK_ZERO_IF_OVER(714))+!!((x)&MASK_ZERO_I F_OVER(715))\
+!!((x)&MASK_ZERO_IF_OVER(716))+!!((x)&MASK_ZERO_I F_OVER(717))\
+!!((x)&MASK_ZERO_IF_OVER(718))+!!((x)&MASK_ZERO_I F_OVER(719))\
+!!((x)&MASK_ZERO_IF_OVER(720))+!!((x)&MASK_ZERO_I F_OVER(721))\
+!!((x)&MASK_ZERO_IF_OVER(722))+!!((x)&MASK_ZERO_I F_OVER(723))\
+!!((x)&MASK_ZERO_IF_OVER(724))+!!((x)&MASK_ZERO_I F_OVER(725))\
+!!((x)&MASK_ZERO_IF_OVER(726))+!!((x)&MASK_ZERO_I F_OVER(727))\
+!!((x)&MASK_ZERO_IF_OVER(728))+!!((x)&MASK_ZERO_I F_OVER(729))\
+!!((x)&MASK_ZERO_IF_OVER(730))+!!((x)&MASK_ZERO_I F_OVER(731))\
+!!((x)&MASK_ZERO_IF_OVER(732))+!!((x)&MASK_ZERO_I F_OVER(733))\
+!!((x)&MASK_ZERO_IF_OVER(734))+!!((x)&MASK_ZERO_I F_OVER(735))\
+!!((x)&MASK_ZERO_IF_OVER(736))+!!((x)&MASK_ZERO_I F_OVER(737))\
+!!((x)&MASK_ZERO_IF_OVER(738))+!!((x)&MASK_ZERO_I F_OVER(739))\
+!!((x)&MASK_ZERO_IF_OVER(740))+!!((x)&MASK_ZERO_I F_OVER(741))\
+!!((x)&MASK_ZERO_IF_OVER(742))+!!((x)&MASK_ZERO_I F_OVER(743))\
+!!((x)&MASK_ZERO_IF_OVER(744))+!!((x)&MASK_ZERO_I F_OVER(745))\
+!!((x)&MASK_ZERO_IF_OVER(746))+!!((x)&MASK_ZERO_I F_OVER(747))\
+!!((x)&MASK_ZERO_IF_OVER(748))+!!((x)&MASK_ZERO_I F_OVER(749))\
+!!((x)&MASK_ZERO_IF_OVER(750))+!!((x)&MASK_ZERO_I F_OVER(751))\
+!!((x)&MASK_ZERO_IF_OVER(752))+!!((x)&MASK_ZERO_I F_OVER(753))\
+!!((x)&MASK_ZERO_IF_OVER(754))+!!((x)&MASK_ZERO_I F_OVER(755))\
+!!((x)&MASK_ZERO_IF_OVER(756))+!!((x)&MASK_ZERO_I F_OVER(757))\
+!!((x)&MASK_ZERO_IF_OVER(758))+!!((x)&MASK_ZERO_I F_OVER(759))\
+!!((x)&MASK_ZERO_IF_OVER(760))+!!((x)&MASK_ZERO_I F_OVER(761))\
+!!((x)&MASK_ZERO_IF_OVER(762))+!!((x)&MASK_ZERO_I F_OVER(763))\
+!!((x)&MASK_ZERO_IF_OVER(764))+!!((x)&MASK_ZERO_I F_OVER(765))\
+!!((x)&MASK_ZERO_IF_OVER(766))+!!((x)&MASK_ZERO_I F_OVER(767))\
+!!((x)&MASK_ZERO_IF_OVER(768))+!!((x)&MASK_ZERO_I F_OVER(769))\
+!!((x)&MASK_ZERO_IF_OVER(770))+!!((x)&MASK_ZERO_I F_OVER(771))\
+!!((x)&MASK_ZERO_IF_OVER(772))+!!((x)&MASK_ZERO_I F_OVER(773))\
+!!((x)&MASK_ZERO_IF_OVER(774))+!!((x)&MASK_ZERO_I F_OVER(775))\
+!!((x)&MASK_ZERO_IF_OVER(776))+!!((x)&MASK_ZERO_I F_OVER(777))\
+!!((x)&MASK_ZERO_IF_OVER(778))+!!((x)&MASK_ZERO_I F_OVER(779))\
+!!((x)&MASK_ZERO_IF_OVER(780))+!!((x)&MASK_ZERO_I F_OVER(781))\
+!!((x)&MASK_ZERO_IF_OVER(782))+!!((x)&MASK_ZERO_I F_OVER(783))\
+!!((x)&MASK_ZERO_IF_OVER(784))+!!((x)&MASK_ZERO_I F_OVER(785))\
+!!((x)&MASK_ZERO_IF_OVER(786))+!!((x)&MASK_ZERO_I F_OVER(787))\
+!!((x)&MASK_ZERO_IF_OVER(788))+!!((x)&MASK_ZERO_I F_OVER(789))\
+!!((x)&MASK_ZERO_IF_OVER(790))+!!((x)&MASK_ZERO_I F_OVER(791))\
+!!((x)&MASK_ZERO_IF_OVER(792))+!!((x)&MASK_ZERO_I F_OVER(793))\
+!!((x)&MASK_ZERO_IF_OVER(794))+!!((x)&MASK_ZERO_I F_OVER(795))\
+!!((x)&MASK_ZERO_IF_OVER(796))+!!((x)&MASK_ZERO_I F_OVER(797))\
+!!((x)&MASK_ZERO_IF_OVER(798))+!!((x)&MASK_ZERO_I F_OVER(799))\
+!!((x)&MASK_ZERO_IF_OVER(800))+!!((x)&MASK_ZERO_I F_OVER(801))\
+!!((x)&MASK_ZERO_IF_OVER(802))+!!((x)&MASK_ZERO_I F_OVER(803))\
+!!((x)&MASK_ZERO_IF_OVER(804))+!!((x)&MASK_ZERO_I F_OVER(805))\
+!!((x)&MASK_ZERO_IF_OVER(806))+!!((x)&MASK_ZERO_I F_OVER(807))\
+!!((x)&MASK_ZERO_IF_OVER(808))+!!((x)&MASK_ZERO_I F_OVER(809))\
+!!((x)&MASK_ZERO_IF_OVER(810))+!!((x)&MASK_ZERO_I F_OVER(811))\
+!!((x)&MASK_ZERO_IF_OVER(812))+!!((x)&MASK_ZERO_I F_OVER(813))\
+!!((x)&MASK_ZERO_IF_OVER(814))+!!((x)&MASK_ZERO_I F_OVER(815))\
+!!((x)&MASK_ZERO_IF_OVER(816))+!!((x)&MASK_ZERO_I F_OVER(817))\
+!!((x)&MASK_ZERO_IF_OVER(818))+!!((x)&MASK_ZERO_I F_OVER(819))\
+!!((x)&MASK_ZERO_IF_OVER(820))+!!((x)&MASK_ZERO_I F_OVER(821))\
+!!((x)&MASK_ZERO_IF_OVER(822))+!!((x)&MASK_ZERO_I F_OVER(823))\
+!!((x)&MASK_ZERO_IF_OVER(824))+!!((x)&MASK_ZERO_I F_OVER(825))\
+!!((x)&MASK_ZERO_IF_OVER(826))+!!((x)&MASK_ZERO_I F_OVER(827))\
+!!((x)&MASK_ZERO_IF_OVER(828))+!!((x)&MASK_ZERO_I F_OVER(829))\
+!!((x)&MASK_ZERO_IF_OVER(830))+!!((x)&MASK_ZERO_I F_OVER(831))\
+!!((x)&MASK_ZERO_IF_OVER(832))+!!((x)&MASK_ZERO_I F_OVER(833))\
+!!((x)&MASK_ZERO_IF_OVER(834))+!!((x)&MASK_ZERO_I F_OVER(835))\
+!!((x)&MASK_ZERO_IF_OVER(836))+!!((x)&MASK_ZERO_I F_OVER(837))\
+!!((x)&MASK_ZERO_IF_OVER(838))+!!((x)&MASK_ZERO_I F_OVER(839))\
+!!((x)&MASK_ZERO_IF_OVER(840))+!!((x)&MASK_ZERO_I F_OVER(841))\
+!!((x)&MASK_ZERO_IF_OVER(842))+!!((x)&MASK_ZERO_I F_OVER(843))\
+!!((x)&MASK_ZERO_IF_OVER(844))+!!((x)&MASK_ZERO_I F_OVER(845))\
+!!((x)&MASK_ZERO_IF_OVER(846))+!!((x)&MASK_ZERO_I F_OVER(847))\
+!!((x)&MASK_ZERO_IF_OVER(848))+!!((x)&MASK_ZERO_I F_OVER(849))\
+!!((x)&MASK_ZERO_IF_OVER(850))+!!((x)&MASK_ZERO_I F_OVER(851))\
+!!((x)&MASK_ZERO_IF_OVER(852))+!!((x)&MASK_ZERO_I F_OVER(853))\
+!!((x)&MASK_ZERO_IF_OVER(854))+!!((x)&MASK_ZERO_I F_OVER(855))\
+!!((x)&MASK_ZERO_IF_OVER(856))+!!((x)&MASK_ZERO_I F_OVER(857))\
+!!((x)&MASK_ZERO_IF_OVER(858))+!!((x)&MASK_ZERO_I F_OVER(859))\
+!!((x)&MASK_ZERO_IF_OVER(860))+!!((x)&MASK_ZERO_I F_OVER(861))\
+!!((x)&MASK_ZERO_IF_OVER(862))+!!((x)&MASK_ZERO_I F_OVER(863))\
+!!((x)&MASK_ZERO_IF_OVER(864))+!!((x)&MASK_ZERO_I F_OVER(865))\
+!!((x)&MASK_ZERO_IF_OVER(866))+!!((x)&MASK_ZERO_I F_OVER(867))\
+!!((x)&MASK_ZERO_IF_OVER(868))+!!((x)&MASK_ZERO_I F_OVER(869))\
+!!((x)&MASK_ZERO_IF_OVER(870))+!!((x)&MASK_ZERO_I F_OVER(871))\
+!!((x)&MASK_ZERO_IF_OVER(872))+!!((x)&MASK_ZERO_I F_OVER(873))\
+!!((x)&MASK_ZERO_IF_OVER(874))+!!((x)&MASK_ZERO_I F_OVER(875))\
+!!((x)&MASK_ZERO_IF_OVER(876))+!!((x)&MASK_ZERO_I F_OVER(877))\
+!!((x)&MASK_ZERO_IF_OVER(878))+!!((x)&MASK_ZERO_I F_OVER(879))\
+!!((x)&MASK_ZERO_IF_OVER(880))+!!((x)&MASK_ZERO_I F_OVER(881))\
+!!((x)&MASK_ZERO_IF_OVER(882))+!!((x)&MASK_ZERO_I F_OVER(883))\
+!!((x)&MASK_ZERO_IF_OVER(884))+!!((x)&MASK_ZERO_I F_OVER(885))\
+!!((x)&MASK_ZERO_IF_OVER(886))+!!((x)&MASK_ZERO_I F_OVER(887))\
+!!((x)&MASK_ZERO_IF_OVER(888))+!!((x)&MASK_ZERO_I F_OVER(889))\
+!!((x)&MASK_ZERO_IF_OVER(890))+!!((x)&MASK_ZERO_I F_OVER(891))\
+!!((x)&MASK_ZERO_IF_OVER(892))+!!((x)&MASK_ZERO_I F_OVER(893))\
+!!((x)&MASK_ZERO_IF_OVER(894))+!!((x)&MASK_ZERO_I F_OVER(895))\
+!!((x)&MASK_ZERO_IF_OVER(896))+!!((x)&MASK_ZERO_I F_OVER(897))\
+!!((x)&MASK_ZERO_IF_OVER(898))+!!((x)&MASK_ZERO_I F_OVER(899))\
+!!((x)&MASK_ZERO_IF_OVER(900))+!!((x)&MASK_ZERO_I F_OVER(901))\
+!!((x)&MASK_ZERO_IF_OVER(902))+!!((x)&MASK_ZERO_I F_OVER(903))\
+!!((x)&MASK_ZERO_IF_OVER(904))+!!((x)&MASK_ZERO_I F_OVER(905))\
+!!((x)&MASK_ZERO_IF_OVER(906))+!!((x)&MASK_ZERO_I F_OVER(907))\
+!!((x)&MASK_ZERO_IF_OVER(908))+!!((x)&MASK_ZERO_I F_OVER(909))\
+!!((x)&MASK_ZERO_IF_OVER(910))+!!((x)&MASK_ZERO_I F_OVER(911))\
+!!((x)&MASK_ZERO_IF_OVER(912))+!!((x)&MASK_ZERO_I F_OVER(913))\
+!!((x)&MASK_ZERO_IF_OVER(914))+!!((x)&MASK_ZERO_I F_OVER(915))\
+!!((x)&MASK_ZERO_IF_OVER(916))+!!((x)&MASK_ZERO_I F_OVER(917))\
+!!((x)&MASK_ZERO_IF_OVER(918))+!!((x)&MASK_ZERO_I F_OVER(919))\
+!!((x)&MASK_ZERO_IF_OVER(920))+!!((x)&MASK_ZERO_I F_OVER(921))\
+!!((x)&MASK_ZERO_IF_OVER(922))+!!((x)&MASK_ZERO_I F_OVER(923))\
+!!((x)&MASK_ZERO_IF_OVER(924))+!!((x)&MASK_ZERO_I F_OVER(925))\
+!!((x)&MASK_ZERO_IF_OVER(926))+!!((x)&MASK_ZERO_I F_OVER(927))\
+!!((x)&MASK_ZERO_IF_OVER(928))+!!((x)&MASK_ZERO_I F_OVER(929))\
+!!((x)&MASK_ZERO_IF_OVER(930))+!!((x)&MASK_ZERO_I F_OVER(931))\
+!!((x)&MASK_ZERO_IF_OVER(932))+!!((x)&MASK_ZERO_I F_OVER(933))\
+!!((x)&MASK_ZERO_IF_OVER(934))+!!((x)&MASK_ZERO_I F_OVER(935))\
+!!((x)&MASK_ZERO_IF_OVER(936))+!!((x)&MASK_ZERO_I F_OVER(937))\
+!!((x)&MASK_ZERO_IF_OVER(938))+!!((x)&MASK_ZERO_I F_OVER(939))\
+!!((x)&MASK_ZERO_IF_OVER(940))+!!((x)&MASK_ZERO_I F_OVER(941))\
+!!((x)&MASK_ZERO_IF_OVER(942))+!!((x)&MASK_ZERO_I F_OVER(943))\
+!!((x)&MASK_ZERO_IF_OVER(944))+!!((x)&MASK_ZERO_I F_OVER(945))\
+!!((x)&MASK_ZERO_IF_OVER(946))+!!((x)&MASK_ZERO_I F_OVER(947))\
+!!((x)&MASK_ZERO_IF_OVER(948))+!!((x)&MASK_ZERO_I F_OVER(949))\
+!!((x)&MASK_ZERO_IF_OVER(950))+!!((x)&MASK_ZERO_I F_OVER(951))\
+!!((x)&MASK_ZERO_IF_OVER(952))+!!((x)&MASK_ZERO_I F_OVER(953))\
+!!((x)&MASK_ZERO_IF_OVER(954))+!!((x)&MASK_ZERO_I F_OVER(955))\
+!!((x)&MASK_ZERO_IF_OVER(956))+!!((x)&MASK_ZERO_I F_OVER(957))\
+!!((x)&MASK_ZERO_IF_OVER(958))+!!((x)&MASK_ZERO_I F_OVER(959))\
+!!((x)&MASK_ZERO_IF_OVER(960))+!!((x)&MASK_ZERO_I F_OVER(961))\
+!!((x)&MASK_ZERO_IF_OVER(962))+!!((x)&MASK_ZERO_I F_OVER(963))\
+!!((x)&MASK_ZERO_IF_OVER(964))+!!((x)&MASK_ZERO_I F_OVER(965))\
+!!((x)&MASK_ZERO_IF_OVER(966))+!!((x)&MASK_ZERO_I F_OVER(967))\
+!!((x)&MASK_ZERO_IF_OVER(968))+!!((x)&MASK_ZERO_I F_OVER(969))\
+!!((x)&MASK_ZERO_IF_OVER(970))+!!((x)&MASK_ZERO_I F_OVER(971))\
+!!((x)&MASK_ZERO_IF_OVER(972))+!!((x)&MASK_ZERO_I F_OVER(973))\
+!!((x)&MASK_ZERO_IF_OVER(974))+!!((x)&MASK_ZERO_I F_OVER(975))\
+!!((x)&MASK_ZERO_IF_OVER(976))+!!((x)&MASK_ZERO_I F_OVER(977))\
+!!((x)&MASK_ZERO_IF_OVER(978))+!!((x)&MASK_ZERO_I F_OVER(979))\
+!!((x)&MASK_ZERO_IF_OVER(980))+!!((x)&MASK_ZERO_I F_OVER(981))\
+!!((x)&MASK_ZERO_IF_OVER(982))+!!((x)&MASK_ZERO_I F_OVER(983))\
+!!((x)&MASK_ZERO_IF_OVER(984))+!!((x)&MASK_ZERO_I F_OVER(985))\
+!!((x)&MASK_ZERO_IF_OVER(986))+!!((x)&MASK_ZERO_I F_OVER(987))\
+!!((x)&MASK_ZERO_IF_OVER(988))+!!((x)&MASK_ZERO_I F_OVER(989))\
+!!((x)&MASK_ZERO_IF_OVER(990))+!!((x)&MASK_ZERO_I F_OVER(991))\
+!!((x)&MASK_ZERO_IF_OVER(992))+!!((x)&MASK_ZERO_I F_OVER(993))\
+!!((x)&MASK_ZERO_IF_OVER(994))+!!((x)&MASK_ZERO_I F_OVER(995))\
+!!((x)&MASK_ZERO_IF_OVER(996))+!!((x)&MASK_ZERO_I F_OVER(997))\
+!!((x)&MASK_ZERO_IF_OVER(998))+!!((x)&MASK_ZERO_I F_OVER(999))\
+!!((x)&MASK_ZERO_IF_OVER(1000))+!!((x)&MASK_ZERO_ IF_OVER(1001))\
+!!((x)&MASK_ZERO_IF_OVER(1002))+!!((x)&MASK_ZERO_ IF_OVER(1003))\
+!!((x)&MASK_ZERO_IF_OVER(1004))+!!((x)&MASK_ZERO_ IF_OVER(1005))\
+!!((x)&MASK_ZERO_IF_OVER(1006))+!!((x)&MASK_ZERO_ IF_OVER(1007))\
+!!((x)&MASK_ZERO_IF_OVER(1008))+!!((x)&MASK_ZERO_ IF_OVER(1009))\
+!!((x)&MASK_ZERO_IF_OVER(1010))+!!((x)&MASK_ZERO_ IF_OVER(1011))\
+!!((x)&MASK_ZERO_IF_OVER(1012))+!!((x)&MASK_ZERO_ IF_OVER(1013))\
+!!((x)&MASK_ZERO_IF_OVER(1014))+!!((x)&MASK_ZERO_ IF_OVER(1015))\
+!!((x)&MASK_ZERO_IF_OVER(1016))+!!((x)&MASK_ZERO_ IF_OVER(1017))\
+!!((x)&MASK_ZERO_IF_OVER(1018))+!!((x)&MASK_ZERO_ IF_OVER(1019))\
+!!((x)&MASK_ZERO_IF_OVER(1020))+!!((x)&MASK_ZERO_ IF_OVER(1021))\
+!!((x)&MASK_ZERO_IF_OVER(1022))+!!((x)&MASK_ZERO_ IF_OVER(1023)))

#define QUANT_BITS_SET(x) QUANT_BITS_SET_RAW( ((BitsSetType)(x)) )

#include <stdio.h>

int main(void)
{
/* Let's make an array of length 4 */

int arr[QUANT_BITS_SET(17)];

printf("%u",(unsigned)(sizeof arr/sizeof*arr));
}

The code prints 2 on my system, but you get the idea...

--

Frederick Gotham
Sep 13 '06 #18
Frederick Gotham wrote:
Richard Heathfield posted:
>Your solution doesn't appear to cope with integers wider than 21 bits,
for a start. Secondly, it doesn't cope with integers that are /fewer/
than 21 bits wide! Thirdly, it could conceivably be counting "padding
bits", bits that do not contribute to the value of the integer.
<snip>
>"If the value of the right operand is negative or is greater
than or equal to the width in bits of the promoted left operand, the
behavior is undefined."


I meant something more along the following lines. Do compilers nowadays
optimise away Bitwise-AND operations when one of the operands is known to
be 0 at compile-time?
Yes. Constant folding is something almost all compilers handle well. (Those
that don't are usually too hopeless to consider for optimizing purposes.)
Take care that the compiler still has to parse the entire expression; some
low-end systems may barf on such enormous constructs.
/* Macro: QUANT_BITS_SET

This macro determines the quantity of bits which
are set in an integer expression whose signedness
is unsigned.
I prefer my desktop calculator, but YMMV.

<snip>
/* Let's make an array of length 4 */
A misleading comment if ever I saw one.
int arr[QUANT_BITS_SET(17)];
Hmm, I'd write that

#define BITS_SET_IN_THE_BINARY_REPRESENTATION_OF_17 2
[...]
int arr[BITS_SET_IN_THE_BINARY_REPRESENTATION_OF_17]

Still uses a rather unwieldy macro, but it's a tad shorter nonetheless.
printf("%u",(unsigned)(sizeof arr/sizeof*arr));
}

The codes prints 2 on my system, but you get the idea...
Yes, the preprocessor is neat.

S.
Sep 14 '06 #19
Skarmander posted:
Constant folding is somethings almost all compilers handle well. (Those
that don't are usually too hopeless to consider for optmizing
purposes.)

We could always write more macros which use the ternary operator to get rid
of redundant Bitwise-AND operations. Something like:

#define BitwiseAND(x,mask) (\
(mask) \
? x & (mask) \
: 0 )

And we could even go further to strip out the additions:

#define Add(a,b) (\
(b) \
? (a) + (b) \
: (a) )

> /* Let's make an array of length 4 */
A misleading comment if ever I saw one.

Don't ask me how, but I somehow calculated that 17 had four bits set.

--

Frederick Gotham
Sep 14 '06 #20
Frederick Gotham wrote:
Richard Heathfield posted:
Your solution doesn't appear to cope with integers wider than 21 bits,
for a start. Secondly, it doesn't cope with integers that are /fewer/
than 21 bits wide! Thirdly, it could conceivably be counting "padding
bits", bits that do not contribute to the value of the integer.
<snip>
"If the value of the right operand is negative or is greater
than or equal to the width in bits of the promoted left operand, the
behavior is undefined."

I meant something more along the following lines. Do compilers nowadays
optimise away Bitwise-AND operations when one of the operands is known to
be 0 at compile-time?

/* Macro: QUANT_BITS_SET

This macro determines the quantity of bits which
are set in an integer expression whose signedness
is unsigned.

The typedef, "BitsSetType", specifies the unsigned
integer type which is to be used for processing.

Before processing takes place, the argument is
converted to BitsSetType.

This macro should work with unsigned integer types
as wide as 1024 bits.

NB: This macro evaluates its argument more than once.
*/

typedef unsigned BitsSetType;

#define IMAX_BITS(m) (\
(m) /((m)%0x3fffffffL+1) /0x3fffffffL \
%0x3fffffffL *30 \
+ (m)%0x3fffffffL /((m)%31+1)/31%31*5 \
+ 4 \
- 12 / ((m)%31+3) )

#define MASK_ZERO_IF_OVER(shift_left_by) (\
shift_left_by < IMAX_BITS((BitsSetType)-1) \
? (BitsSetType)1 << shift_left_by \
: (BitsSetType)0 )

#define QUANT_BITS_SET_RAW(x) (\
!!((x)&(BitsSetType)1) + !!((x)&(BitsSetType)1<<1) \
+!!((x)&(BitsSetType)1<<2) + !!((x)&(BitsSetType)1<<3) \
....
+!!((x)&(BitsSetType)1<<14) +!!((x)&(BitsSetType)1<<15) \
+!!((x)&MASK_ZERO_IF_OVER(16))+!!((x)&MASK_ZERO_IF_OVER(17))\
....
+!!((x)&MASK_ZERO_IF_OVER(1022))+!!((x)&MASK_ZERO_IF_OVER(1023)))

#define QUANT_BITS_SET(x) QUANT_BITS_SET_RAW( ((BitsSetType)(x)) )

#include <stdio.h>

int main(void)
{
/* Let's make an array of length 4 */

int arr[QUANT_BITS_SET(17)];

printf("%u",(unsigned)(sizeof arr/sizeof*arr));
}

The codes prints 2 on my system, but you get the idea...
You know, you could make that a /lot/ shorter, and I know you're so
very fond of IMAX_BITS, but it's unnecessary and without benefits here.
I haven't verified if everything is strictly portable in this version,
but if not, it should serve as at least a starting point.

#define COUNT_8(x)    (!!((x) & 0x80)) + (!!((x) & 0x40)) \
                    + (!!((x) & 0x20)) + (!!((x) & 0x10)) \
                    + (!!((x) & 0x08)) + (!!((x) & 0x04)) \
                    + (!!((x) & 0x02)) + (!!((x) & 0x01))
#define COUNT_16(x)   COUNT_8 ((x) & 0xFFULL) \
                    + COUNT_8 ((x) >> 8)
#define COUNT_32(x)   COUNT_16 ((x) & 0xFFFFULL) \
                    + COUNT_16 ((x) >> 16)
#define COUNT_64(x)   COUNT_32 ((x) & 0xFFFFFFFFULL) \
                    + COUNT_32 ((x) >> 32)
#define COUNT_128(x)  COUNT_64 ((x) & 0xFFFFFFFFFFFFFFFFULL) \
                    + COUNT_64 ((x) >> 60 >> 4)
#define COUNT_256(x)  COUNT_128(((1ULL << 60 << 60 << 8) - 1) & (x)) \
                    + COUNT_128( (x) >> 60 >> 60 >> 8)
#define COUNT_512(x)  COUNT_256(((1ULL << 60 << 60 << 60 << 60 << 16) - 1) & (x)) \
                    + COUNT_256( (x) >> 60 >> 60 >> 60 >> 60 >> 16)
#define COUNT_1024(x) COUNT_512(((1ULL << 60 << 60 << 60 << 60 << 60 << 60 << 60 << 60 << 32) - 1) & (x)) \
                    + COUNT_512( (x) >> 60 >> 60 >> 60 >> 60 >> 60 >> 60 >> 60 >> 60 >> 32)
#define COUNT(x)      COUNT_1024((unsigned long long) x)

You'll need to fix the line wrapping a bit, but it should be obvious
where. (Maybe uintmax_t would be better than unsigned long long.)

Sep 14 '06 #21

Richard Heathfield wrote:
kondal said:

<snip>
There is even better and faster algorithm but difficult to remember.

#define MASK_01010101 (((unsigned int)(-1))/3)
#define MASK_00110011 (((unsigned int)(-1))/5)
#define MASK_00001111 (((unsigned int)(-1))/17)

int bitcount (unsigned int n)
{
n = (n & MASK_01010101) + ((n >> 1) & MASK_01010101) ;
n = (n & MASK_00110011) + ((n >> 2) & MASK_00110011) ;
n = (n & MASK_00001111) + ((n >> 4) & MASK_00001111) ;
return n % 255 ;
}

What makes you think this is faster?
Good question. That % 255 looks expensive to me.
And what happens when the code is
ported to a platform with 64-bit unsigned ints?
The code works as expected.

Did you perhaps mean 256?

Mark Williams

Sep 14 '06 #22

kondal wrote:
Richard Heathfield wrote:
kondal said:

<snip>
There is even better and faster algorithm but difficult to remember.
>
#define MASK_01010101 (((unsigned int)(-1))/3)
#define MASK_00110011 (((unsigned int)(-1))/5)
#define MASK_00001111 (((unsigned int)(-1))/17)
>
int bitcount (unsigned int n)
{
n = (n & MASK_01010101) + ((n >> 1) & MASK_01010101) ;
n = (n & MASK_00110011) + ((n >> 2) & MASK_00110011) ;
n = (n & MASK_00001111) + ((n >> 4) & MASK_00001111) ;
return n % 255 ;
}
What makes you think this is faster? And what happens when the code is
ported to a platform with 64-bit unsigned ints?

--
Richard Heathfield
"Usenet is a strange place" - dmr 29/7/1999
http://www.cpax.org.uk
email: rjh at above domain (but drop the www, obviously)

There is no loop here to check, and the number of clock cycles executed
would be far less than with my previous algorithm.

Here is the program for 64 bits

#define MASK_01010101 (((unsigned int)(-1))/3)
#define MASK_00110011 (((unsigned int)(-1))/5)
#define MASK_00001111 (((unsigned int)(-1))/17)
#define MASK_00111100 (((unsigned int)(-1))/257)
A very strange name for this last mask... I guess it doesn't matter, but
still...
>
int bitcount (unsigned int n)
{
n = (n & MASK_01010101) + ((n >> 1) & MASK_01010101) ;
n = (n & MASK_00110011) + ((n >> 2) & MASK_00110011) ;
n = (n & MASK_00001111) + ((n >> 4) & MASK_00001111) ;
n = (n & MASK_00111100) + ((n >> 8) & MASK_00111100) ;
return n % 255 ;
}
Your original code worked for ints up to 255 bits. This version would
work with ints up to 65535 bits if you changed the last line to:

return n % 65535;

Mark Williams

Sep 14 '06 #23
mark said:
>
Richard Heathfield wrote:
>kondal said:
<snip>
return n % 255 ;
}

What makes you think this is faster?

Good question. That % 255 looks expensive to me.
>And what happens when the code is
ported to a platform with 64-bit unsigned ints?

The code works as expected.

Did you perhaps mean 256?
I actually meant n, for arbitrarily large n.

--
Richard Heathfield
"Usenet is a strange place" - dmr 29/7/1999
http://www.cpax.org.uk
email: rjh at above domain (but drop the www, obviously)
Sep 14 '06 #24
Harald van Dĳk posted:
You know, you could make that a /lot/ shorter, and I know you're so
very fond of IMAX_BITS, but it's unnecessary and without benefits here.
I haven't verified if everything is strictly portable in this version,
but if not, it should serve as at least a starting point.

The whole point of me writing that original code in such a pedantic and
horribly verbose manner is that it would be fully portable.

I don't see how your code could be used (in the portable sense, that is).

--

Frederick Gotham
Sep 14 '06 #25
Frederick Gotham wrote:
Harald van Dĳk posted:
You know, you could make that a /lot/ shorter, and I know you're so
very fond of IMAX_BITS, but it's unnecessary and without benefits here.
I haven't verified if everything is strictly portable in this version,
but if not, it should serve as at least a starting point.


The whole point of me writing that original code in such a pendantic and
horribly verbose manner is that it would be fully-portable.

I don't see how your code could be used (in the portable sense, that is).
I haven't verified that it's strictly portable, but if there's anything
not strictly portable in it, it's trivially changed, because the
essence of it is valid. If you have a value of at most 1024 bits, you
can get the most significant bits by right-shifting by 512, and if you
do that in multiple operations, you avoid the restrictions on >>'s
right operand. You can get the 512 least significant bits by finding a
mask of either all bits one if there are no more than 512 bits, or
(2**512 - 1) if there are more. (1ULL << 512) (split into multiple
operations) is either 2**512, or 0, because of the rules for unsigned
overflow. So (1ULL << 512) - 1 is exactly that mask. You can repeat
this process to split both 512-bit values into 256-bit values, and so
on. What do you think is wrong with it?

Sep 14 '06 #26
Richard Heathfield wrote:
>
mark said:

Richard Heathfield wrote:
kondal said:
<snip>
return n % 255 ;
}

What makes you think this is faster?
Good question. That % 255 looks expensive to me.
And what happens when the code is
ported to a platform with 64-bit unsigned ints?
The code works as expected.

Did you perhaps mean 256?

I actually meant n, for arbitrarily large n.
The comparative timing is even more complicated than all that.

The running time for the alternate method, which was:
n = 0;
while(x)
{
x &= (x-1);
n++;
}
is proportional to the final value of (n).

It's conceivable that there may be some situations
where the final value of (n) will usually be low,
and other situations
where the final value of (n) will usually be high.

--
pete
Sep 14 '06 #27

pete wrote:
Richard Heathfield wrote:

mark said:
>
Richard Heathfield wrote:
>kondal said:
>>
<snip>
return n % 255 ;
}
>>
>What makes you think this is faster?
>
Good question. That % 255 looks expensive to me.
>
>And what happens when the code is
>ported to a platform with 64-bit unsigned ints?
>
The code works as expected.
>
Did you perhaps mean 256?
I actually meant n, for arbitrarily large n.

The comparative timing is even more complicated than all that.

The running time for the alternate method, which was:
n = 0;
while(x)
{
x &= (x-1);
n++;
}
is proportional to the final value of (n).

It's conceivable that there may be some situations
where the final value of (n) will usually be low,
and other situations
where the final value of (n) will usually be high.

--
pete
Yep, the worst case would be when all the bits are set.

-kondal

Sep 15 '06 #28

Richard Heathfield wrote:
mark said:

Richard Heathfield wrote:
kondal said:
<snip>
return n % 255 ;
}

What makes you think this is faster?
Good question. That % 255 looks expensive to me.
And what happens when the code is
ported to a platform with 64-bit unsigned ints?
The code works as expected.

Did you perhaps mean 256?

I actually meant n, for arbitrarily large n.
I picked the code up from somewhere on the net and wrote it in my snippets
notebook during my college days. If you have better code, I don't mind
using it and entering it into my snippets book for reference under
your name.

Do you have the code for arbitrarily large n?

-kondal

Sep 15 '06 #29

mark wrote:
kondal wrote:
Richard Heathfield wrote:
kondal said:
>
<snip>
>
There is even better and faster algorithm but difficult to remember.

#define MASK_01010101 (((unsigned int)(-1))/3)
#define MASK_00110011 (((unsigned int)(-1))/5)
#define MASK_00001111 (((unsigned int)(-1))/17)

int bitcount (unsigned int n)
{
n = (n & MASK_01010101) + ((n >> 1) & MASK_01010101) ;
n = (n & MASK_00110011) + ((n >> 2) & MASK_00110011) ;
n = (n & MASK_00001111) + ((n >> 4) & MASK_00001111) ;
return n % 255 ;
}
>
What makes you think this is faster? And what happens when the code is
ported to a platform with 64-bit unsigned ints?
>
--
Richard Heathfield
"Usenet is a strange place" - dmr 29/7/1999
http://www.cpax.org.uk
email: rjh at above domain (but drop the www, obviously)
There is no loop here to check, and the number of clock cycles executed
would be far less than with my previous algorithm.

Here is the program for 64 bits

#define MASK_01010101 (((unsigned int)(-1))/3)
#define MASK_00110011 (((unsigned int)(-1))/5)
#define MASK_00001111 (((unsigned int)(-1))/17)
#define MASK_00111100 (((unsigned int)(-1))/257)

A very strange name for this last mask... I guess it doesn't matter, but
still...
I really didn't check the macro naming, I just kept something; sorry for
misleading,

int bitcount (unsigned int n)
{
n = (n & MASK_01010101) + ((n >> 1) & MASK_01010101) ;
n = (n & MASK_00110011) + ((n >> 2) & MASK_00110011) ;
n = (n & MASK_00001111) + ((n >> 4) & MASK_00001111) ;
n = (n & MASK_00111100) + ((n >> 8) & MASK_00111100) ;
return n % 255 ;
}

Your original code worked for ints up to 255 bits. This version would
work with ints up to 65535 bits if you changed the last line to:

return n % 65535.
Really!! It's a good point :) I haven't checked that.

-kondal

Sep 15 '06 #30

mark wrote:
int bitcount (unsigned int n)
{
n = (n & MASK_01010101) + ((n >> 1) & MASK_01010101) ;
n = (n & MASK_00110011) + ((n >> 2) & MASK_00110011) ;
n = (n & MASK_00001111) + ((n >> 4) & MASK_00001111) ;
n = (n & MASK_00111100) + ((n >> 8) & MASK_00111100) ;
return n % 255 ;
}

Your original code worked for ints up to 255 bits. This version would
work with ints up to 65535 bits if you changed the last line to:

return n % 65535.

Mark Williams

1) I think MASK_00111100 should be MASK_11111111

2) return n % 65535 is expensive !
That should be changed to n & 65535.
OR
n % 65536 and hopefully the compiler will optimize it into n & 65535

Sep 15 '06 #31
"Samuel Stearley" <ny****@gmail.comwrites:
mark wrote:
int bitcount (unsigned int n)
{
n = (n & MASK_01010101) + ((n >> 1) & MASK_01010101) ;
n = (n & MASK_00110011) + ((n >> 2) & MASK_00110011) ;
n = (n & MASK_00001111) + ((n >> 4) & MASK_00001111) ;
n = (n & MASK_00111100) + ((n >> 8) & MASK_00111100) ;
return n % 255 ;
}

Your original code worked for ints up to 255 bits. This version would
work with ints up to 65535 bits if you changed the last line to:

return n % 65535.

Mark Williams


1) I think MASK_00111100 should be MASK_11111111

2) return n % 65535 is expensive !
That should be changed to n & 65535.
OR
n % 65536 and hopefully the compiler will optimize it into n & 65535
You have missed that n%255 or n%65535 does a sideways
addition (exactly as in base 10, where summing the digits
gives you the value mod 9).

Someone did measurements on several variants:
http://www-db.stanford.edu/~manku/bi.../bitcount.html
As expected, table lookups are on top... but they need the
table to be in cache.

Yours,

--
Jean-Marc
Sep 15 '06 #32
Cool, I've learned something new today.

Jean-Marc Bourguet wrote:
"Samuel Stearley" <ny****@gmail.comwrites:
mark wrote:
int bitcount (unsigned int n)
{
n = (n & MASK_01010101) + ((n >> 1) & MASK_01010101) ;
n = (n & MASK_00110011) + ((n >> 2) & MASK_00110011) ;
n = (n & MASK_00001111) + ((n >> 4) & MASK_00001111) ;
n = (n & MASK_00111100) + ((n >> 8) & MASK_00111100) ;
return n % 255 ;
}

Your original code worked for ints up to 255 bits. This version would
work with ints up to 65535 bits if you changed the last line to:

return n % 65535.

Mark Williams

1) I think MASK_00111100 should be MASK_11111111

2) return n % 65535 is expensive !
That should be changed to n & 65535.
OR
n % 65536 and hopefully the compiler will optimize it into n & 65535

You have missed that n%255 or n%65535 does a sideways
addition (exactly as in base 10, where summing the digits
gives you the value mod 9).

Someone did measurements on several variants:
http://www-db.stanford.edu/~manku/bi.../bitcount.html
As expected, table lookups are on top... but they need the
table to be in cache.

Yours,

--
Jean-Marc
Sep 15 '06 #33
kondal said:

<snip>
Do you have the code for arbitrarily large n?
Did I not already post, in this very thread, code that works for arbitrarily
large n? I could have sworn I did.

--
Richard Heathfield
"Usenet is a strange place" - dmr 29/7/1999
http://www.cpax.org.uk
email: rjh at above domain (but drop the www, obviously)
Sep 15 '06 #34
Harald van Dĳk posted:
What do you think is wrong with it?

Sorry, I was mistaken, your code is good. I've been playing around with it
for a while, and this is my latest effort. I don't know why, but my PC
freezes when I try to compile it. Nonetheless, here it is, a compile-time
constant specifying the quantity of bits which are set in an unsigned
integer expression.

(Note that I append "RAW" to the name of a macro function to indicate that
it doesn't parenthesize its arguments.)

/* Source file: core.c */

#include "header.hpp"

#include <stdio.h>

unsigned const arr[] =
{1,3,5,7,9,
667,668,669,770,
1022,1023,1024,1025};

int main(void)
{
unsigned const *p = arr,
*const pover = arr + sizeof arr/sizeof*arr;

unsigned register i;

do i = *p++, printf("%u\n",QUANT_BITS_SET(i));
while(p!=pover);

return 0;
}
/* Header file: header.h */

#define IMAX_BITS_RAW(m) (\
m /(m%0x3fffffffL+1) /0x3fffffffL %0x3fffffffL *30 \
+ m%0x3fffffffL /(m%31+1)/31%31*5 \
+ 4 \
- 12 /(m%31+3) )
/* Macro: MAX_VALUE_FOR_UNSIGNED_TYPE_OF_EXPR_RAW

This macro takes an integer expression, and
yields the highest value for the unsigned integer
type corresponding to the expression (after
integer promotion takes place). */

#define MAX_VALUE_FOR_UNSIGNED_TYPE_OF_EXPR_RAW(expr)\
(expr-expr-1U)
/* Macro: BITS_IN_UNSIGNED_TYPE_OF_EXPR_RAW

This macro takes an integer expression, and
yields the amount of value representation bits
in the unsigned integer type corresponding
to the expression. (after integer promotion
takes place)*/

#define BITS_IN_UNSIGNED_TYPE_OF_EXPR_RAW(expr) \
IMAX_BITS_RAW( \
MAX_VALUE_FOR_UNSIGNED_TYPE_OF_EXPR_RAW(expr) )
/* Macro: RIGHT_SHIFT_NO_REDUNDANT_RAW

This macro provides a "safe" form of
right shifting, whereby the amount of bits
to shift by may be larger than the actual
type (in such cases, it yields zero.)*/

#define RIGHT_SHIFT_NO_REDUNDANT_RAW(x,spaces) (\
BITS_IN_UNSIGNED_TYPE_OF_EXPR_RAW(x) > spaces \
? x >> spaces \
: x - x )

#define QUANT_BITS_SET_16_RAW(x) (\
!!(x&1U) + !!(x&1U<<1) \
+!!(x&1U<<2) + !!(x&1U<<3) \
+!!(x&1U<<4) + !!(x&1U<<5) \
+!!(x&1U<<6) + !!(x&1U<<7) \
+!!(x&1U<<8) + !!(x&1U<<9) \
+!!(x&1U<<10) +!!(x&1U<<11) \
+!!(x&1U<<12) +!!(x&1U<<13) \
+!!(x&1U<<14) +!!(x&1U<<15) )

#define QUANT_BITS_SET_32_RAW(x)\
QUANT_BITS_SET_16_RAW(x)\
+ QUANT_BITS_SET_16_RAW(RIGHT_SHIFT_NO_REDUNDANT_RAW(x,16))

#define QUANT_BITS_SET_64_RAW(x)\
QUANT_BITS_SET_32_RAW(x)\
+ QUANT_BITS_SET_32_RAW(RIGHT_SHIFT_NO_REDUNDANT_RAW(x,32))

#define QUANT_BITS_SET_128_RAW(x)\
QUANT_BITS_SET_64_RAW(x)\
+ QUANT_BITS_SET_64_RAW(RIGHT_SHIFT_NO_REDUNDANT_RAW(x,64))

#define QUANT_BITS_SET_256_RAW(x)\
QUANT_BITS_SET_128_RAW(x)\
+ QUANT_BITS_SET_128_RAW(RIGHT_SHIFT_NO_REDUNDANT_RAW(x,128))

#define QUANT_BITS_SET_512_RAW(x)\
QUANT_BITS_SET_256_RAW(x)\
+ QUANT_BITS_SET_256_RAW(RIGHT_SHIFT_NO_REDUNDANT_RAW(x,256))

#define QUANT_BITS_SET_1024_RAW(x)\
QUANT_BITS_SET_512_RAW(x)\
+ QUANT_BITS_SET_512_RAW(RIGHT_SHIFT_NO_REDUNDANT_RAW(x,512))

#define QUANT_BITS_SET(x)\
(QUANT_BITS_SET_1024_RAW((x)))

--

Frederick Gotham
Sep 15 '06 #35
