On Jun 17, 6:51 am, badc0...@gmail.com wrote:
Tomás Ó hÉilidhe wrote:
Here's a macro that Mathew Hendry posted back in the year 2000 for
achieving binary integer literals that evaluate to compile-time
constants:
  #define BIN8(n)\
      (((0x##n##ul&1<< 0)>> 0)|((0x##n##ul&1<< 4)>> 3)\
      |((0x##n##ul&1<< 8)>> 6)|((0x##n##ul&1<<12)>> 9)\
      |((0x##n##ul&1<<16)>>12)|((0x##n##ul&1<<20)>>15)\
      |((0x##n##ul&1<<24)>>18)|((0x##n##ul&1<<28)>>21))
Now admittedly I don't know how it works mathematically
The 0, 4, 8, ... correspond to the "bit position" when the argument is
interpreted as a hexadecimal value.
For example, the "1" at '0b00100000' occupies bit 20 in 0x00100000, so
0x00100000 & (1 << 20) isolates that bit, and the subsequent >> 15
moves it (back) from its hexadecimal position to its 'proper' binary
position, bit 5.
, but still I
want to perfect it. The first thing I did was to make it more readable
(in my own opinion, of course):
... so that gives us:
#define BIN8(n)\
    (       ((0x##n##ul & 1lu<<0) >>0)   |   ((0x##n##ul & 1lu<<4) >>3)   \
        |   ((0x##n##ul & 1lu<<8) >>6)   |   ((0x##n##ul & 1lu<<12)>>9)   \
        |   ((0x##n##ul & 1lu<<16)>>12)  |   ((0x##n##ul & 1lu<<20)>>15)  \
        |   ((0x##n##ul & 1lu<<24)>>18)  |   ((0x##n##ul & 1lu<<28)>>21)  \
    )
Is that perfect now?
Two things come to mind:
a) it doesn't cope well with usenet (re-)formatting
b) you have the original "ul" mixed with your "lu". I'd like it better
if all suffixes were the same.
- Too much repetition. Adding 0x and UL can be done by a helper macro.
- Suggest parentheses for awkward precedence of & relative to <<:
#define HEX_CODED_BIN(N)\
 (((N & (1 << 0)) >> 0)|((N & (1 << 4)) >> 3)\
 |((N & (1 << 8)) >> 6)|((N & (1 << 12)) >> 9)\
 |((N & (1 << 16)) >> 12)|((N & (1 << 20)) >> 15)\
 |((N & (1 << 24)) >> 18)|((N & (1 << 28)) >> 21))
#define BIN8(BITS) HEX_CODED_BIN(0x ## BITS ## UL)
Furthermore, the shifting can be done first and then the masking,
which simplifies the choice of shift values:
#define HEX_CODED_BIN(N) \
((((N >> 0) & 1) << 0) | (((N >> 16) & 1) << 4) | \
 (((N >> 4) & 1) << 1) | (((N >> 20) & 1) << 5) | \
 (((N >> 8) & 1) << 2) | (((N >> 24) & 1) << 6) | \
 (((N >> 12) & 1) << 3) | (((N >> 28) & 1) << 7))
See? The logic is a lot clearer now, because offsets in hex space
don't have to be translated into shift amounts in binary space. The 0,
4, 8, 12 ... values are obvious: we are shifting a hex digit into the
least significant digit position. The & 1 tells us we are masking out
a 0 or 1, and the 0, 1, 2, 3 ... shifts are obvious also: shifting a
bit into the correct position within the byte.
I transposed the calculation into columns, for further readability.
- Remark: A BIN32 macro is easy to make:
#define BIN32(A, B, C, D) \
    ((BIN8(A) << 24) | (BIN8(B) << 16) | (BIN8(C) << 8) | BIN8(D))
- Complete program:
#include <stdio.h>
#define HEX_CODED_BIN(N) \
((((N >> 0) & 1) << 0) | (((N >> 16) & 1) << 4) | \
 (((N >> 4) & 1) << 1) | (((N >> 20) & 1) << 5) | \
 (((N >> 8) & 1) << 2) | (((N >> 24) & 1) << 6) | \
 (((N >> 12) & 1) << 3) | (((N >> 28) & 1) << 7))
#define BIN8(BITS) HEX_CODED_BIN(0x ## BITS ## UL)
#define BIN32(A, B, C, D) \
    ((BIN8(A) << 24) | (BIN8(B) << 16) | (BIN8(C) << 8) | BIN8(D))
int main(void)
{
unsigned int bin1 = BIN8(10101010);
unsigned int bin2 = BIN8(01010101);
unsigned int bin3 = BIN8(11111111);
unsigned int bin4 = BIN8(00000000);
unsigned long bin5 = BIN32(10101010, 01010101, 11110000, 00001111);
printf("bin1 == %x\n", bin1);
printf("bin2 == %x\n", bin2);
printf("bin3 == %x\n", bin3);
printf("bin4 == %x\n", bin4);
printf("bin5 == %lx\n", bin5);
return 0;
}
Output:
bin1 == aa
bin2 == 55
bin3 == ff
bin4 == 0
bin5 == aa55f00f
Cheers.