
# Convert 32 bit unsigned int to 16 bit signed int.

Hello
I am looking for some efficient way to convert a 32 bit unsigned
integer to a 16 bit signed integer. All I want is the lower 16 bits
of the 32 bit unsigned integer, with bit 15 (0..15) to be used as the
sign bit for the 16 bit signed integer. Any ideas/help greatly
appreciated.

Thanks.
Sep 11 '08 #1
On Sep 11, 12:56 pm, Fore <brian.william...@blueyonder.co.uk> wrote:
> [snip]
1. Remove the word "efficient" from your lexicon. Study Knuth's/
Hoare's law.
2. Have you not studied the bitwise operators?
Sep 11 '08 #2
On 11 Sep, 21:31, red floyd <redfl...@gmail.com> wrote:
> On Sep 11, 12:56 pm, Fore <brian.william...@blueyonder.co.uk> wrote:
> [snip]

1. Remove the word "efficient" from your lexicon. Study Knuth's/
Hoare's law.
2. Have you not studied the bitwise operators?
Answer to 1: I accept removing "efficient", and no to the laws.
Answer to 2: yes, but I also want to be able to achieve this
without a host of compiler warnings about the possible loss of data.
Sep 11 '08 #3
Fore wrote:
On 11 Sep, 21:31, red floyd <redfl...@gmail.com> wrote:
>[snip]
>
>Answer to 2 is yes, but I also want to be able to achieve this
>without a host of compiler warnings about the possible loss of data.
If compiler warnings bother you, disable them. Going from 32 bits to 16
bits *will cause* loss of data (the top 16 bits); the compiler cannot let
it go without a warning *unless* you tell it not to warn you.

V
--
Sep 11 '08 #4
On Thu, 11 Sep 2008 12:56:54 -0700 (PDT), Fore
<br**************@blueyonder.co.uk> wrote in comp.lang.c++:
> [snip]
Aside from the obvious nonsense of "efficient", as others have already
mentioned, you haven't provided an adequate definition of the
problem to allow anyone to suggest ANY implementation.

You haven't told us how you plan to translate a larger unsigned value
to a smaller signed value. How do you decide which values are
negative?

--
Jack Klein
Home: http://JK-Technology.Com
FAQs for
comp.lang.c http://c-faq.com/
comp.lang.c++ http://www.parashift.com/c++-faq-lite/
alt.comp.lang.learn.c-c++
http://www.club.cc.cmu.edu/~ajo/docs/FAQ-acllc.html
Sep 12 '08 #5
In article, Triple-DES <De**********@gmail.com> wrote:
[...]
I interpreted the OP's specification as if he wanted to extract the
value of the lower 16 bits interpreted as a 2's complement bit
pattern.

short low_16_2sc(unsigned n)
{
    return (n & 0x7fffu) - (n & 0x8000u);
}
Isn't this non-portable? It seems you'd need to cast both sub-expressions
to int; otherwise the entire return expression will be unsigned, making it
implementation-defined what you get when you convert to short a value that
should become negative. The first cast is necessary to prevent the
compiler from converting the second sub-expression back to unsigned.

return (int) (n & 0x7fffu) - (int) (n & 0x8000u);

Another approach:

short low_16_2sc( unsigned n )
{
    return (int) ((n & 0xFFFF) ^ 0x8000) - 0x8000;
}
Sep 12 '08 #6
Fore wrote:
> [snip]
Maybe it's just me, but I fail to see how a simple

short n = short(value);

wouldn't do what you want (well, assuming 'short' is 16 bits in your
system, which it usually is). I'm probably missing something.
Sep 12 '08 #7
On Sep 12, 10:16 am, Juha Nieminen <nos...@thanks.invalid> wrote:
Fore wrote:
> [snip]
Maybe it's just me, but I fail to see how a simple
short n = short(value);
wouldn't do what you want (well, assuming 'short' is 16 bits
in your system, which it usually is). I'm probably missing
something.
According to the standard, "If the destination type is signed,
the value is unchanged if it can be represented in the
destination type (and bit-field width); otherwise, the value is
implementation-defined." The C standard is slightly more
restrictive: "When a value with integer type is converted to
another integer type other than _Bool, if the value can be
represented by the new type, it is unchanged. [...] Otherwise,
the new type is signed and the value cannot be represented in
it; either the result is implementation-defined or an
implementation-defined signal is raised."

--
James Kanze (GABI Software) email:ja*********@gmail.com
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung
9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34
Sep 12 '08 #8
On Thu, 11 Sep 2008 12:56:54 -0700 (PDT), Fore
<br**************@blueyonder.co.uk> wrote:
>[snip]
Another method to consider:

short u32_to_i16( unsigned long ul ) {
    const unsigned long ul1 = 1UL;
    if( *((short *)&ul1) )
        return *((short *)&ul);
    else
        return *((short *)&ul + 1);
}

I don't have two different Endian machines to test on, but I think
this should be portable and work on either one. It could be made a bit
faster as follows (define LITTLEENDIAN or not as appropriate for the
particular machine/implementation in use):

#define LITTLEENDIAN

:

#ifdef LITTLEENDIAN

short u32_to_i16( unsigned long ul ) {
    return *((short *)&ul);
}

#else

short u32_to_i16( unsigned long ul ) {
    return *((short *)&ul + 1);
}

#endif

James Tursa
Sep 14 '08 #9
In article <3d********************************@4ax.com>, James Tursa
<ac*******************@hotmail.com> wrote:
[snip]

> Another method to consider:
>
> short u32_to_i16( unsigned long ul ) {
>     const unsigned long ul1 = 1UL;
>     if( *((short *)&ul1) )
>         return *((short *)&ul);
>     else
>         return *((short *)&ul + 1);
> }
[snip]
I hate to be harsh, but my god, what you just wrote could have simply been
written as

short u32_to_i16( unsigned long ul ) { return (short) ul; }

As with your code, this relies on the machine being two's complement,
having a 16-bit short, and simply taking the low 16 bits without any
overflow checking.
Sep 14 '08 #10
On Sun, 14 Sep 2008 07:34:35 -0500, bl********@gishpuppy.com (blargg)
wrote:
>[snip]
>
>I hate to be harsh, but my god, what you just wrote could have simply been
>written as
>
>short u32_to_i16( unsigned long ul ) { return (short) ul; }
>
>As with your code, this relies on the machine being two's complement,
>having a 16-bit short, and simply taking the low 16 bits without any
>overflow checking.
Well, I disagree (but I could be wrong). Your posted method, I
believe, *does* do a value copy and invokes undefined behavior if the
ul value overflows a short, i.e., this expression

(short) ul

of converting an unsigned integer into a signed integer is only
defined in the standard if the value to be converted fits in the
signed integer range. It is undefined and the result is implementation
dependent if the value does not fit. Isn't that correct? My posted
method attempts to avoid this and simply do a bit copy without value
checking. So I don't believe that your post is in fact equivalent to
my post. Maybe the standard gurus could comment on this and correct me
if I am wrong here.

And yes, my post does rely on some assumptions about sizes of short
and long which I should have mentioned.

I am still trying to figure out your "two's complement" comment. Not
sure what this has to do with it. Could you elaborate?

James Tursa
Sep 14 '08 #11
On Sun, 14 Sep 2008 16:12:53 GMT, James Tursa
<ac*******************@hotmail.com> wrote:
>
I am still trying to figure out your "two's complement" comment. Not
sure what this has to do with it. Could you elaborate?
Let me clarify. I understand that there may be a difference in the
value of the result depending on whether the signed integer
representation on the machine is 2's complement or 1's complement or
whatever. But my understanding of the OP's post was that this was what he
wanted, which is why I saw the 2's complement issue as not relevant. I
could be misunderstanding the OP here, though. In that case, however, the OP
would need to say something like "I want the lower 16-bits interpreted
as a 2's complement signed integer and converted properly even if the
underlying representation is 1's complement and I need the lowest
value to be overflow checked ..." etc. etc. etc. I didn't
think he wanted that.

James Tursa
Sep 14 '08 #12
On 12 Sep, 09:39, blargg....@gishpuppy.com (blargg) wrote:
> [snip]
>
> Isn't this non-portable? It seems you'd need to cast both sub-expressions
> to int, otherwise the entire return expression will be unsigned [snip]
>
>     return (int) (n & 0x7fffu) - (int) (n & 0x8000u);
Yes, you're absolutely right. My mistake.
Sep 15 '08 #13
On 14 Sep, 18:12, James Tursa <aclassyguywithakno...@hotmail.com>
wrote:
> [snip]
>
> Well, I disagree (but I could be wrong). Your posted method, I
> believe, *does* do a value copy and invokes undefined behavior if the
> ul value overflows a short, i.e., this expression
>
> (short) ul
>
> of converting an unsigned integer into a signed integer is only
> defined in the standard if the value to be converted fits in the
> signed integer range. [snip]
It's true that the result is implementation dependent, but it doesn't
invoke UB.
In your function on the other hand, the cast from unsigned* to short*
yields an unspecified pointer value, and I believe that was blargg's
point.

You are however right that the two functions are not guaranteed to be
equivalent, but again, I don't think that was the point.

DP
Sep 15 '08 #14
On Sun, 14 Sep 2008 23:27:01 -0700 (PDT), Triple-DES
<De**********@gmail.com> wrote:
>[snip]
>
>It's true that the result is implementation dependent, but it doesn't
>invoke UB.
>In your function on the other hand, the cast from unsigned* to short*
>yields an unspecified pointer value, and I believe that was blargg's
>point.
Ah, thanks! I stand corrected. Although I certainly did not get that
point at all from blargg's comments.

James Tursa
Sep 15 '08 #15
Fore wrote:
> [snip]
As nobody has suggested it yet, there must be a problem with the
following approach.

signed16 to_signed_16(const unsigned32 u)
{
    unsigned32 m(u%65536); // the 16 lower bits
    return ((signed16) ((m>32768) ? -1 : 1)) * (signed16)(m%32768);
    // 15 lower bits and the 16th for signedness
}

This did not even produce any warnings (with g++ -Wall, short for
signed16 and unsigned int for unsigned32). Of course I don't know about
the efficiency.

Ralf
Sep 15 '08 #16
In article <71********************************@4ax.com>, James Tursa
<ac*******************@hotmail.com> wrote:
On Sun, 14 Sep 2008 23:27:01 -0700 (PDT), Triple-DES
<De**********@gmail.com> wrote:
[snip]
It's true that the result is implementation dependent, but it doesn't
invoke UB.
In your function on the other hand, the cast from unsigned* to short*
yields an unspecified pointer value, and I believe that was blargg's
point.

Ah, thanks! I stand corrected. Although I certainly did not get that
point at all from blargg's comments.
Yeah sorry about that, they aren't exactly equivalent. Yours relies on
dereferencing a reinterpret_cast, the representation of integers in memory
(little/big/other endian), alignment requirements for short, and two's
complement representation of signed values. Mine relies simply on two's
complement representation, and what the implementation does when
converting an out-of-range value to a smaller signed type. It seems
unlikely that the requirements of your code would be met by some machine,
but not mine, and more likely that the requirements of mine would be met,
but not yours. After all, why would (short) 0xFFFFFFFF not result in -1
but reading the low 16 bits as short (as yours does) result in -1? I
suppose a compiler could do range checking on a cast. Anyway, the best
portable solution is the other one I posted, or the one someone else
posted that masks off the sign bit.
Sep 15 '08 #17
In article <48***********************@newsspool1.arcor-online.net>, Ralf
Goertz <r_******@expires-2006-11-30.arcornews.de> wrote:
Fore wrote:
> [snip]

As nobody has suggested it, yet, there must be a problem with the
following approach.

signed16 to_signed_16(const unsigned32 u)
{
    unsigned32 m(u%65536); // the 16 lower bits
    return ((signed16) ((m>32768) ? -1 : 1)) * (signed16)(m%32768);
    // 15 lower bits and the 16th for signedness
}

This did not even produce any warnings (with g++ -Wall, short for
signed16 and unsigned int for unsigned32). Of course I don't know about
the efficiency.
It fails for all negative values. 0x8000 (-32768) yields 0, 0x8001
(-32767) yields -1, etc. The high bit of a 16-bit signed value simply has
the place value of -32768, rather than 32768 as in an unsigned value.

signed16 to_signed_16(const unsigned32 u)
{
    unsigned32 m(u%65536);
    return (signed16) ((m > 32767 ? -32768 : 0) + (signed16) (m%32768));
}

But you might as well just write it as

signed16 to_signed_16(const unsigned32 u)
{
    return (signed16) ((int) (u & 32767) - (int) (u & 32768));
}

which is essentially how Triple-DES wrote it.
Sep 15 '08 #18
blargg wrote:
>[snip]
>
>signed16 to_signed_16(const unsigned32 u)
>{
>    return (signed16) ((int) (u & 32767) - (int) (u & 32768));
>}
>
>which is essentially how Triple-DES wrote it.
But is that really portable? The reason I chose the % operator instead
of & was that mod is not affected by endianness weirdness. I thought
there were systems which are neither big nor little endian but something
in between. Can we really be sure that (u & 32768) == (u % 32768)?

Ralf
Sep 15 '08 #19
On 15 Sep, 22:58, James Kanze <james.ka...@gmail.com> wrote:
On Sep 15, 3:19 pm, Ralf Goertz
Can we really be sure that (u & 32768) = (u % 32768) ?

Actually, you can be sure that it's not:-).
Except of course if u is a multiple of 65536 :)
But (u & 32767) is
guaranteed to be the same as (u % 32768).
Can't argue with that.

DP
Sep 16 '08 #20
On Sep 16, 9:45 am, Triple-DES <DenPlettf...@gmail.com> wrote:
On 15 Sep, 22:58, James Kanze <james.ka...@gmail.com> wrote:
On Sep 15, 3:19 pm, Ralf Goertz
Can we really be sure that (u & 32768) = (u % 32768) ?
Actually, you can be sure that it's not:-).
Except of course if u is a multiple of 65536 :)
Except if u is an even multiple of 65536:-).

Sep 16 '08 #21
On 16 Sep, 12:02, James Kanze <james.ka...@gmail.com> wrote:
On Sep 16, 9:45 am, Triple-DES <DenPlettf...@gmail.com> wrote:
On 15 Sep, 22:58, James Kanze <james.ka...@gmail.com> wrote:
On Sep 15, 3:19 pm, Ralf Goertz
Can we really be sure that (u & 32768) = (u % 32768) ?
Actually, you can be sure that it's not:-).
Except of course if u is a multiple of 65536 :)

Except if u is an even multiple of 65536:-).
Hmm...that's interesting. Did I make a grammatical error, or isn't
that exactly what I wrote?
Sep 16 '08 #22
Triple-DES wrote:
>On 16 Sep, 12:02, James Kanze <james.ka...@gmail.com> wrote:
>>On Sep 16, 9:45 am, Triple-DES <DenPlettf...@gmail.com> wrote:
>>On 15 Sep, 22:58, James Kanze <james.ka...@gmail.com> wrote:
On Sep 15, 3:19 pm, Ralf Goertz
Can we really be sure that (u & 32768) = (u % 32768) ?
Actually, you can be sure that it's not:-).
Except of course if u is a multiple of 65536 :)
Except if u is an even multiple of 65536:-).

Hmm...that's interesting. Did I make a grammatical error, or isn't
that exactly what I wrote?
That's exactly what you wrote, except for the word "even".

But odd multiples do as well:
(65536 & 32768) is 0, and (65536 % 32768) is also 0.

--
Thomas
Sep 16 '08 #23
On Sep 16, 2:36 pm, "Thomas J. Gritzan" <phygon_antis...@gmx.de>
wrote:
Triple-DES wrote:
On 16 Sep, 12:02, James Kanze <james.ka...@gmail.com> wrote:
On Sep 16, 9:45 am, Triple-DES <DenPlettf...@gmail.com> wrote:
>On 15 Sep, 22:58, James Kanze <james.ka...@gmail.com> wrote:
On Sep 15, 3:19 pm, Ralf Goertz
Can we really be sure that (u & 32768) = (u % 32768) ?
Actually, you can be sure that it's not:-).
Except of course if u is a multiple of 65536 :)
Except if u is an even multiple of 65536:-).
Hmm...that's interesting. Did I make a grammatical error, or
isn't that exactly what I wrote?
That's exactly what you wrote except the word "even".
But odd multiples do as well:
(65536 & 32768) is 0, and (65536 % 32768) is also 0.
Yep. I was thinking of even multiples of 32768. (Of course,
all multiples of 65536 are even multiples of 32768.)

Sep 16 '08 #24
In article
<55**********************************@d1g2000hsg.googlegroups.com>,
Triple-DES <De**********@gmail.com> wrote:
On 15 Sep, 22:58, James Kanze <james.ka...@gmail.com> wrote:
[...]
But (u & 32767) is guaranteed to be the same as (u % 32768).

Can't argue with that.
As long as u is either an unsigned type or a signed type with a
non-negative value.
Sep 16 '08 #25
On Sep 16, 8:31 pm, blargg....@gishpuppy.com (blargg) wrote:
In article
Triple-DES <DenPlettf...@gmail.com> wrote:
On 15 Sep, 22:58, James Kanze <james.ka...@gmail.com> wrote:
[...]
But (u & 32767) is guaranteed to be the same as (u % 32768).
Can't argue with that.
As long as u is either an unsigned type or a signed type with
a non-negative value.
In the code (cut several postings up in the thread), u was
defined as a 32 bit unsigned.

Sep 16 '08 #26
In article
Kanze <ja*********@gmail.com> wrote:

[...]
Good point. In the loop, that should be (1UL << j), and
not just (1 << j). And the error will go unobserved unless you
have a machine with ints smaller than 24 bits (which used to be
quite common).
On this tangent, these days it's difficult to write programs which work
with 16-bit ints, since there's little to test on. One can easily
unintentionally rely on more than 16 bits in int in many unexpected places
that are hard to search for. And using long, or something verbose like
int_least32_t, everywhere hurts readability. In most of my own code meant
to run on decently powerful machines, I now just put a preprocessor check
that INT_MAX >= 0x7FFFFFFF and use int almost everywhere.

Sep 17 '08 #27
On Sep 17, 5:23 pm, blargg....@gishpuppy.com (blargg) wrote:
In article
Kanze <james.ka...@gmail.com> wrote:
[...]
Good point. In the loop, that should be (1UL << j), and not
just (1 << j). [snip]

On this tangent, these days it's difficult to write programs
which work with 16-bit ints, since there's little to test on.
[snip] In most of my own code meant to
run on decently powerful machines, I now just put a
preprocessor check that INT_MAX >= 0x7FFFFFFF and use int
almost everywhere.
That's really a valid option for a lot of things. I mostly
develop large servers, and there's absolutely no chance of
their running on a 16 bit machine, so there's no point in my
trying to be portable to such. In the past, however...
[snip]
There is a space after it when the text is inserted into the
Google Post buffer. After that... I'm afraid I have little
control over what happens.

Sep 17 '08 #28
try using bitwise operations

unsigned int a:15;
Sep 21 '08 #29
