Bytes IT Community

Compare sign in C++

Hi,

How can we compare sign of numbers in C++ ?

Bye,
Claude
Jul 22 '05 #1
37 Replies



"Claude Gagnon" <cl**********@videotron.ca> wrote in message news:xg**********************@weber.videotron.net...
How can we compare sign of numbers in C++ ?

What do you consider the sign of zero to be?

inline int signof(int a) { return (a == 0) ? 0 : (a<0 ? -1 : 1); }

if(signof(-45) == signof(-4) ) ...
Jul 22 '05 #2

"Claude Gagnon" <cl**********@videotron.ca> wrote...
How can we compare sign of numbers in C++ ?


Depending on what you need, something like

template<class T> bool signTheSame(T t1, T t2)
{
return (t1 < 0) == (t2 < 0);
}

Victor
Jul 22 '05 #3

Claude Gagnon wrote:
...
How can we compare sign of numbers in C++ ?
...


What do you mean by "sign"? The traditional 'signum' function
(-1, 0, +1) for an arithmetic value 'v' can be computed in C++ as follows

int signum = (v > 0) - (v < 0);

which means that signs of values 'v1' and 'v2' can be compared in the
following manner

bool signs_equal = ((v1 > 0) - (v1 < 0)) == ((v2 > 0) - (v2 < 0));

--
Best regards,
Andrey Tarasevich

Jul 22 '05 #4

most significant bit is 1 if negative and number is signed

Jul 22 '05 #5

"Claude Gagnon" <cl**********@videotron.ca> wrote in message news:<xg**********************@weber.videotron.net>...
Hi,

How can we compare sign of numbers in C++ ?


Is this what you mean?

inline bool SameSign(int a, int b)
{
return (a < 0) == (b < 0);
}

or

template<class T>
inline bool SameSign(T a, T b)
{
return (a < 0) == (b < 0);
}
Jul 22 '05 #6


"dwrayment" <dw*******@rogers.com> wrote in message news:Iz*********************@news01.bloor.is.net.cable.rogers.com...
most significant bit is 1 if negative and number is signed

How do you know what the MSB is? Easier to test against zero.

Jul 22 '05 #7

"Claude Gagnon" <cl**********@videotron.ca> wrote ...
Hi,

How can we compare sign of numbers in C++ ?

Numbers ?
You can try :
same_sign = ((a*b) > 0)
#define same_sign(a, b) (((a)*(b))>0)
You have to decide what to do when one or both of a and b are 0, and
be careful with float types.
Happy new year,
Pierre
Jul 22 '05 #8

a*b is inefficient

if you know your gonna be using say 32 bit numbers all the time then
same sign = (! (a xor b)) >> 31 if mbs(a) == msb(b) then the
negation of a xor b will be 1 at mbs.
or you can do some kind of bit comparsion with 0x8000 0000 to test status
of msb.



Jul 22 '05 #9

> a*b is inefficient

On what architecture?
if you know your gonna be using say 32 bit numbers all the time then
same sign = (! (a xor b)) >> 31 if mbs(a) == msb(b) then the
negation of a xor b will be 1 at mbs.
Do you mean the same thing by "msb" and "mbs?" I thought mbs was a typo
at first, but you did it twice...
or you can do some kind of bit comparsion with 0x8000 0000 to test status
of msb.


Without the space, right? And still (of course) assuming 32 bits.

Jul 22 '05 #10


"Jeff Schwab" <je******@comcast.net> wrote in message
news:V9********************@comcast.com...
a*b is inefficient
On what architecture?

on any architechure. multiplcation is much less efficient then adding and
bitops.

if you know your gonna be using say 32 bit numbers all the time then
same sign = (! (a xor b)) >> 31 if mbs(a) == msb(b) then the
negation of a xor b will be 1 at mbs.


Do you mean the same thing by "msb" and "mbs?" I thought mbs was a typo
at first, but you did it twice...
or you can do some kind of bit comparsion with 0x8000 0000 to test status of msb.


yes of course i cant spell msb.
Without the space, right? And still (of course) assuming 32 bits.

naturally no space.

Jul 22 '05 #11

dwrayment wrote:
> a*b is inefficient


On what architecture?

on any architechure. multiplcation is much less efficient then adding and
bitops.
...


What is your source of information? Can you provide any evidence to
support this statement?

--
Best regards,
Andrey Tarasevich

Jul 22 '05 #12

i cant give you a direct source, as i dont keep track of books i read. it
is common sense that multipying requires more work and relative to adding
and bit ops is inefficent (by computers standards), so any book about
optimizing code probably has a section on this. that not to say dont ever
multipy as its still pretty dang quick by human standards, but if you can
do something without multipying do it.

"Andrey Tarasevich" <an**************@hotmail.com> wrote in message
news:Ee********************@comcast.com...

Jul 22 '05 #13

On Fri, 02 Jan 2004 06:53:47 +0000, dwrayment wrote:

[ Please don't top post, thanx, M4 ]


On modern CPUs, multiply is a one clockcycle instruction and just as fast
as addition or bitwise manipulation. This has been true for a while now.

Besides, compilers are pretty good at optimizing. Shifting left by 4 bits
or multiplying by 16 translate to the same opcodes on any sane compiler.
The occasions are very rare where hand-optimizing a multiplication into
something else would give a better result than the compiler can give.

Even if it would make a difference, are you going to notice? Not many
programs nowadays on modern hardware (and my main server is a P90!) will
make you want to do this kind of optimization (which it isn't).

HTH,
M4

Jul 22 '05 #14

dwrayment wrote:
i cant give you a direct source, as i dont keep track of books i
read. it is common sense that multipying requires more work and
relative to adding and bit ops is inefficent (by computers
standards)...


"Common sense" has no place in performance measurements.

Indeed, multiplication has historically been faster than addition
in floating point operations.

Stop trying to use "common sense" and start actually measuring
these things.
-tom!
Jul 22 '05 #15


"dwrayment" <dw*******@rogers.com> wrote in message news:%z**********************@news04.bloor.is.net.cable.rogers.com...


This hasn't been true since the 70's or earlier. Most processors now do integer
operations (multiply or bitwise) in the same amount of time. Multiply hasn't taken
longer since processors had to simulate it with additions.

Jul 22 '05 #16


"Martijn Lievaart" <m@remove.this.part.rtij.nl> wrote in message
news:pa****************************@remove.this.part.rtij.nl...
faster processers are no excuse for lazy programming. if you can prgram
something to do the same task faster with less work a good programmer will
do so.

as for the replys below i suggest you try doing it yourself. and dont
multipy by a power of 2 as that is equavialent to bit shifting.
mult still is less efficiant than doing other ops.
Jul 22 '05 #17

dwrayment wrote:
faster processers are no excuse for lazy programming. if you can prgram
something to do the same task faster with less work a good programmer will
do so.

as for the replys below i suggest you try doing it yourself. and dont
multipy by a power of 2 as that is equavialent to bit shifting.
mult still is less efficiant than doing other ops.


On some architectures. Can you name one?

Jul 22 '05 #18

dwrayment wrote:
faster processers are no excuse for lazy programming. if you can
prgram something to do the same task faster with less work a
good programmer will do so.
LOL ok dude.
as for the replys below i suggest you try doing it yourself.
and dont multipy by a power of 2 as that is equavialent to bit
shifting. mult still is less efficiant than doing other ops.


You should probably compile some code yourself and look at the
generated (and optimized) output. You won't though, because you
are lazy and think you already know the answer. Then you should
look through the processor info book and see what the various
costs of doing operations are.

so sad,
-tom!
Jul 22 '05 #19

ok last post for me if some of you dont like it theres nothing more i can
say.

first off, im not trying to reinvent the multipy op by simulating it with
adding and bit shifts. the question posed was how can we compare
sign of numbers.

soultion 1 was a*b .... i posed a different solution simply !a xor b >>
31. and now i think about it id rather do
a xor b & 0x70000000 == 0. here multiplying is inefficient because doing a
simple xor and bitwise and is less work than doing a mulitpy (on even the
best of the best of machines).

of course if i were trying to reinvent the wheel and do mulipication by
using adding and bit shifts i wouldnt be able to match the work done by
todays
best engineers. im sure they doing an excellent job. thats all i can say.

Jul 22 '05 #20

dwrayment wrote in
news:Mx********************@twister01.bloor.is.net.cable.rogers.com:
ok last post for me if some of you dont like it theres nothing more i
can say.

first off, im not trying to reinvent the multipy op by simulating it
with adding and bit shifts. the question posed was how can we compare
sign of numbers.

soultion 1 was a*b .... i posed a different solution simply !a xor
b >> 31. and now i think about it id rather do
a xor b & 0x70000000 == 0.
Well, this is the *real* problem: all that bit-twiddling and you've gone
and masked off the sign bit.
here multiplying is inefficient because
doing a simple xor and bitwise and is less work than doing a mulitpy
(on even the best of the best of machines).


Lets start by working with some correct code:

bool thing( int a, int b )
{
return (a * b) > 0;
}

We can write a truth table (N < 0 and P > 0):

a, b, return
------------
0, 0, false
0, P, false
0, N, false
P, 0, false
P, N, false
P, P, true
N, 0, false
N, P, false
N, N, true

Assuming we have a 32-bit 2's complement machine, that is:

bool thing( int a, int b )
{
return (a && b) && ((a >> 31) == (b >> 31));
}

or (if you prefer):

bool thing( int a, int b )
{
return
(a && b)
&&
0 == ( ( unsigned(a) ^ unsigned(b) ) & 0x80000000U )
;
}

Assuming (again) that either of the above is faster than (a * b) > 0,
why wouldn't an optimizing C++ compiler make the change.

The reason others have objected to your assertion that "bit-twiddling
is faster" is that not only can the compiler do the job for you, so
can modern CPUs, and when the CPU does it you get a double hit as
the generated code is shorter too.
Rob.
--
http://www.victim-prime.dsl.pipex.com/
Jul 22 '05 #21

my apologies it is a xor b & 0x80000000. typo
"Rob Williscroft" <rt*@freenet.REMOVE.co.uk> wrote in message
news:Xn**********************************@195.129.110.130...

Jul 22 '05 #22

dwrayment wrote:
my apologies it is a xor b & 0x80000000. typo
Why would you work at the bit level, tying yourself to a particular size
of integer, just to figure out whether two numbers (a and b) are greater
than a third number (zero)? C++ has built-in operators for this.

-Jeff

PS- Please stop top-posting.

Jul 22 '05 #23

> Lets start by working with some correct code:

bool thing( int a, int b )
{
return (a * b) > 0;
}


What about overflow?
Jul 22 '05 #24

heinz baer wrote in news:5a**************************@posting.google.com:
Lets start by working with some correct code:

bool thing( int a, int b )
{
return (a * b) > 0;
}


What about overflow?


This was the starting point and thus is correct by definition (it is
the definition). I used the meaningless name "thing" as I don't think it's
a good implementation of any "Compare Sign" function.

But you're right, my bit-twiddling alternatives didn't attempt to take
account of (or emulate) thing()'s overflow behaviour. I may be wrong,
but IIUC the result of calling thing( a, b ) when a * b would overflow
would be undefined (or maybe implementation-defined) behaviour anyway.

Rob.
--
http://www.victim-prime.dsl.pipex.com/
Jul 22 '05 #25

trying hard not to even look as this post anymore but new messages keep
coming up i guess i want to answer.

1. why restrict numbers to 32 bits

unless your planning on making your function a macro, your doing that
very thing anyway, as the parameter will likely be of type int.
the of course you might wanna make a second function for doubles etc.
you can however make it a macro if you want. in this case point taken
i would stlll choose my solution and make the (sacfrice) since most
number crunching is done with 32 bits anyway.
2. overflow

a xor b & 0x80000000 == 0 has no overflow and does not require checking
for such a thing as it is not a * b. the numbers do not grow larger by
any factor.
3. there no need to case a and b to unsigned as one gentlman did.

hooping this is the last last reply i need to do. hoping the orginal
sender got whatever solution he wants! the rest is all hogwash really.


Jul 22 '05 #26

"dwrayment" <dw*******@rogers.com> wrote in message news:<5a**********************@news04.bloor.is.net.cable.rogers.com>...


What about this? (works only for signed integer types using two's complement)

#include <limits>

template <typename T>
inline bool same_sign(const T &a,const T &b)
{
// possibly do something here with
// std::numeric_limits<T>::is_integer or
// std::numeric_limits<T>::is_signed

return ((a ^ b) & std::numeric_limits<T>::min()) == 0;
}

This seems to solve the problem of dependence on word size... or am I mistaken?
Jul 22 '05 #27

looks great iff numeric_limits<T>::min() is what i think it is 0x80000000,
etc.. but then you specify within the function must be two-compliment
so my only quam granted its a tiny one is design what appears to be a
generic function but really isnt. but as far as making it work for chars,
and shorts
and _int64's it looks great. the only other thing id wonder about is what
::min() actually does, is it sacrificing efficiency for generic. if
::min() does more work then multiping or whatever then it defeats the
purpose of using xor.


Jul 22 '05 #28

"dwrayment" <dw*******@rogers.com> wrote in message news:<wG*****************@news04.bloor.is.net.cable.rogers.com>...

"Nate Barney" <na*********@vanderbilt.edu> wrote in message
news:9f*************************@posting.google.co m...
What about this? (works only for signed integer types using two's
complement)

#include <limits>

template <typename T>
inline bool same_sign(const T &a,const T &b)
{
// possibly do something here with
// std::numeric_limits<T>::is_integer or
// std::numeric_limits<T>::is_signed

return ((a ^ b) & std::numeric_limits<T>::min()) == 0;
}

This seems to solve the problem of dependence on word size... or am
I mistaken?
looks great iff numeric_limits<T>::min() is what i think it is 0x80000000,


numeric_limits<int>::min() is 0x80000000 on 32-bit platforms, or at least
it is on my 32-bit platform (x86, linux, gcc)
etc.. but then you specify within the function must be two-compliment
I'm not certain of this at all, but I seem to recall that the C standard
actually stipulates that signed integers are represented using two's
complement. I could be way off base, and I don't have a copy of the
standard... can anybody verify this one way or the other?
so my only quam granted its a tiny one is design what appears to be a
generic function but really isnt. but as far as making it work for chars,
and shorts
and _int64's it looks great. the only other thing id wonder about is what
::min() actually does, is it sacrificing efficiency for generic. if
::min() does more work then multiping or whatever then it defeats the
purpose of using xor.


It's my understanding that numeric_limits is usually implemented through the
use of template specialization, and as such, functions like min() can simply
return the appropriate value and quite probably are inlined as well. I
suppose it depends on the quality of the implementation though.
Jul 22 '05 #29


"Nate Barney" <na*********@vanderbilt.edu> wrote in message news:9f**************************@posting.google.com...
I'm not certain of this at all, but I seem to recall that the C standard
actually stipulates that signed integers are represented using two's
complement.


No, the current C standard says that two's complement, one's complement,
and sign-magnitude are the allowable representations. The 1990 C standard
(the one that applies to C++) doesn't even go that far. All it says is that the
positive values have to be the same as their unsigned counterparts (i.e., straight
binary).

Jul 22 '05 #30


"Nate Barney" <na*********@vanderbilt.edu> wrote in message
news:9f**************************@posting.google.com...

what i mean by that is the template function would allow programmers to
arbirtaliy try using doubles, floats and god knows what else structs of any
kind
each of which would have undefined behaviour. aka what is
same_sign<double>( a, b) ?

as opposed to intergers not being two-compliment ints

Jul 22 '05 #31

"dwrayment" <dw*******@rogers.com> wrote in message news:<tX******************@news04.bloor.is.net.cable.rogers.com>...

what i mean by that is the template function would allow programmers to
arbirtaliy try using doubles, floats and god knows what else structs of any
kind


Well, doubles, yes, arbitrary structs, no. This function would only
compile for types for which the appropriate ^, &, and == are defined.
So most structs/classes would be right out. For doubles, I suppose
you could either throw an exception based on
std::numeric_limits<T>::is_integer, or take a cue from the FAQ (or
even the Standard itself!) and just document the types for which this
function should work. I prefer the second approach.

On the other hand, if someone developed an integer class using two's
complement, overloaded the appropriate operators, and specialized
std::numeric_limits<T> for the class, it should work correctly with
same_sign.

This algorithm is clever, but perhaps too clever. If you wanted it to
work with doubles, or regardless of representation scheme of integers,
I think you'd have to resort to a more arithmetic approach.

Nate
Jul 22 '05 #32

i wouldnt want to work with doubles, thats why i would choose to go with
the simpler 32 bits only, and not bother with generic coding.
but its up to the guy asking what he wants to do.


Jul 22 '05 #33

"Claude Gagnon" <cl**********@videotron.ca> wrote in message news:<xg**********************@weber.videotron.net>...

I've been messing around with direct sign bit access for quick testing.
could be applied to floats or integer types too....

Seems to be stable on both BIG and LITTLE endian machines..
Seems to be quick too...
#if defined(WIN32) || defined(LINUX)
#pragma pack(push)

struct SignBit {
unsigned char : 8; // padding
unsigned char : 8; // padding
unsigned char : 8; // padding
unsigned char : 8; // padding
unsigned char : 8; // padding
unsigned char : 8; // padding
unsigned char : 8; // padding
unsigned char : 7; // padding
bool sign : 1; // sign bit access
};

#else // UNIX

struct SignBit {
bool sign : 1; // sign bit access
unsigned char : 7; // padding
unsigned char : 8; // padding
unsigned char : 8; // padding
unsigned char : 8; // padding
unsigned char : 8; // padding
unsigned char : 8; // padding
unsigned char : 8; // padding
unsigned char : 8; // padding
};
#endif
union DblSignBit {
double dbl;
SignBit bit;
};

#if defined(WIN32) || defined(LINUX)
#pragma pack(pop)
#endif
But still gives me the shivers.. :)

any good reasons why I shouldn't be doing this???
Jul 22 '05 #34


"EnTn" <en*****************@yahoo.co.uk> wrote in message news:1d*************************@posting.google.com...
But still gives me the shivers.. :)


any good reasons why I shouldn't be doing this???


Yeah, despite your half hearted efforts, it's not portable in
the least. The packing of bit fields is highly implementation
defined. I've seen different compilers even on the same platform
using different packing orders.

Jul 22 '05 #35

> Yeah, despite your half hearted efforts, it's not portable in
the least. The packing of bit fields is highly implementation
defined. I've seen different compilers even on the same platform
using different packing orders.


ok, so specialise the bit field code for each compiler used..
Jul 22 '05 #36

> What about this? (works only for signed integer types using two's complement)

return ((a ^ b) & std::numeric_limits<T>::min()) == 0;

This seems to solve the problem of dependence on word size... or am I mistaken?


This is the same or worse in every respect than:
return (a < 0) ^ (b < 0);
Jul 22 '05 #37


"Old Wolf" <ol*****@inspire.net.nz> wrote in message news:84**************************@posting.google.com...
What about this? (works only for signed integer types using two's complement)

return ((a ^ b) & std::numeric_limits<T>::min()) == 0;

This seems to solve the problem of dependence on word size... or am I mistaken?


This is the same or worse in every respect than:
return (a < 0) ^ (b < 0);


or even
a < 0 != b < 0

keeps the bool from being widened back to int :-)

Jul 22 '05 #38
