Hi,
How can we compare the sign of numbers in C++?
Bye,
Claude
"Claude Gagnon" <cl**********@videotron.ca> wrote:
> How can we compare sign of numbers in C++ ?
What do you consider the sign of zero to be?
inline int signof(int a) { return (a == 0) ? 0 : (a<0 ? -1 : 1); }
if(signof(-45) == signof(-4) ) ...
"Claude Gagnon" <cl**********@videotron.ca> wrote:
> How can we compare sign of numbers in C++ ?
Depending on what you need, something like
template<class T> bool signTheSame(T t1, T t2)
{
return (t1 < 0) == (t2 < 0);
}
Victor
Claude Gagnon wrote:
> How can we compare sign of numbers in C++ ?
What do you mean by "sign"? The traditional 'signum' function
(-1, 0, +1) for an arithmetic value 'v' can be calculated in C++ as follows:
int signum = (v > 0) - (v < 0);
which means that signs of values 'v1' and 'v2' can be compared in the
following manner
bool signs_equal = ((v1 > 0) - (v1 < 0)) == ((v2 > 0) - (v2 < 0));
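A minimal compilable sketch of this branchless signum comparison (the function names are mine, not from the post):

```cpp
#include <cassert>

// Branchless signum: (v > 0) - (v < 0) yields -1, 0, or +1.
inline int signum(int v) { return (v > 0) - (v < 0); }

// Two values have the same sign (in the -1/0/+1 sense) exactly when
// their signum values match.
inline bool signs_equal(int v1, int v2) { return signum(v1) == signum(v2); }
```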
--
Best regards,
Andrey Tarasevich
The most significant bit is 1 if the number is signed and negative.
"Claude Gagnon" <cl**********@videotron.ca> wrote:
> Hi,
> How can we compare sign of numbers in C++ ?
> Bye, Claude
"Claude Gagnon" <cl**********@videotron.ca> wrote:
> How can we compare sign of numbers in C++ ?
Is this what you mean?
inline bool SameSign(int a, int b)
{
return (a < 0) == (b < 0);
}
or
template<class T>
inline bool SameSign(T a, T b)
{
return (a < 0) == (b < 0);
}
"dwrayment" <dw*******@rogers.com> wrote:
> most significant bit is 1 if negative and number is signed
How do you know what the MSB is? Easier to test against zero.
"Claude Gagnon" <cl**********@videotron.ca> wrote:
> How can we compare sign of numbers in C++ ?
Numbers?
You can try:
same_sign = ((a*b) > 0)
#define same_sign(a, b) (((a)*(b))>0)
You have to decide what to do when one or both of a and b are 0, and be careful with float types.
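A sketch of this product-based test, with the zero caveat spelled out (the helper name is made up):

```cpp
#include <cassert>

// Product-based sign test: a*b is positive only when a and b are both
// positive or both negative. Caveats (as noted above): any zero input
// yields false, and a*b can overflow int, which is undefined behaviour.
inline bool same_sign_product(int a, int b) { return (a * b) > 0; }
```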
Happy new year,
Pierre
a*b is inefficient.
If you know you're gonna be using say 32-bit numbers all the time, then
same sign = (! (a xor b)) >> 31; if mbs(a) == msb(b) then the negation of a xor b will be 1 at mbs.
Or you can do some kind of bit comparison with 0x8000 0000 to test the status of the msb.
> a*b is inefficient
On what architecture?
> if you know your gonna be using say 32 bit numbers all the time then same sign = (! (a xor b)) >> 31 if mbs(a) == msb(b) then the negation of a xor b will be 1 at mbs.
Do you mean the same thing by "msb" and "mbs"? I thought mbs was a typo at first, but you did it twice...
> or you can do some kind of bit comparsion with 0x8000 0000 to test status of msb.
Without the space, right? And still (of course) assuming 32 bits.
"Jeff Schwab" <je******@comcast.net> wrote:
> > a*b is inefficient
> On what architecture?
On any architecture. Multiplication is much less efficient than adding and bit ops.
> Do you mean the same thing by "msb" and "mbs"? I thought mbs was a typo at first, but you did it twice...
Yes, of course I can't spell msb.
> Without the space, right? And still (of course) assuming 32 bits.
Naturally, no space.
dwrayment wrote:
> > a*b is inefficient
> On what architecture?
> on any architechure. multiplcation is much less efficient then adding and bitops. ...
What is your source of information? Can you provide any evidence to
support this statement?
--
Best regards,
Andrey Tarasevich
I can't give you a direct source, as I don't keep track of the books I read. It
is common sense that multiplying requires more work and, relative to adding
and bit ops, is inefficient (by computer standards), so any book about
optimizing code probably has a section on this. That's not to say don't ever
multiply, as it's still pretty quick by human standards, but if you can
do something without multiplying, do it.
On Fri, 02 Jan 2004 06:53:47 +0000, dwrayment wrote:
[ Please don't top post, thanx, M4 ]
> > What is your source of information? Can you provide any evidence to support this statement?
> i cant give you a direct source, as i dont keep track of books i read. it is common sense that multipying requires more work and relative to adding and bit ops is inefficent (by computers standards), so any book about optimizing code probably has a section on this. that not to say dont ever multipy as its still pretty dang quick by human standards, but if you can do something without multipying do it.
On modern CPUs, multiply is a one-clock-cycle instruction and just as fast
as addition or bitwise manipulation. This has been true for a while now.
Besides, compilers are pretty good at optimizing. Shifting left by 4 bits
or multiplying by 16 translate to the same opcodes on any sane compiler.
The occasions are very rare where hand-optimizing multiplication into
something else would give a better result than the compiler can give.
Even if it would make a difference, are you going to notice? Not many
programs nowadays on modern hardware (and my main server is a P90!) will
make you want to do this kind of optimization (which it isn't).
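The shift/multiply equivalence claimed above is easy to check at the value level (function names are mine); since the two forms always compute the same result for non-negative ints, an optimizer is free to pick either encoding:

```cpp
#include <cassert>

// For non-negative ints with no overflow, x * 16 and x << 4 compute
// the same value, which is why a compiler can emit either instruction.
inline int times16_mul(int x)   { return x * 16; }
inline int times16_shift(int x) { return x << 4; }
```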
HTH,
M4
dwrayment wrote:
> i cant give you a direct source, as i dont keep track of books i read. it is common sense that multipying requires more work and relative to adding and bit ops is inefficent (by computers standards)...
"Common sense" has no place in performance measurements.
Indeed, multiplication has historically been faster than addition
in floating point operations.
Stop trying to use "common sense" and start actually measuring
these things.
-tom!
"dwrayment" <dw*******@rogers.com> wrote:
> i cant give you a direct source, as i dont keep track of books i read. [...] but if you can do something without multipying do it.
This hasn't been true since the 70's or earlier. Most processors now do integer
operations (multiply or bitwise) in the same amount of time. Multiply hasn't taken
longer since processors had to simulate it with additions.
"Martijn Lievaart" <m@remove.this.part.rtij.nl> wrote:
> On modern CPUs, multiply is a one clockcycle instruction and just as fast as addition or bitwise manipulation. This has been true for a while now. [...]
> Even if it would make a difference, are you going to notice? [...]
> HTH, M4
Faster processors are no excuse for lazy programming. If you can program
something to do the same task faster with less work, a good programmer will
do so.
As for the replies below, I suggest you try doing it yourself. And don't
multiply by a power of 2, as that is equivalent to bit shifting.
Mult still is less efficient than doing other ops.
dwrayment wrote:
> faster processers are no excuse for lazy programming. if you can prgram something to do the same task faster with less work a good programmer will do so.
> as for the replys below i suggest you try doing it yourself. and dont multipy by a power of 2 as that is equavialent to bit shifting. mult still is less efficiant than doing other ops.
On some architectures. Can you name one?
dwrayment wrote:
> faster processers are no excuse for lazy programming. if you can prgram something to do the same task faster with less work a good programmer will do so.
LOL ok dude.
> as for the replys below i suggest you try doing it yourself. and dont multipy by a power of 2 as that is equavialent to bit shifting. mult still is less efficiant than doing other ops.
You should probably compile some code yourself and look at the
generated (and optimized) output. You won't though, because you
are lazy and think you already know the answer. Then you should
look through the processor info book and see what the various
costs of doing operations are.
so sad,
-tom!
OK, last post for me. If some of you don't like it, there's nothing more I can
say.
First off, I'm not trying to reinvent the multiply op by simulating it with
adding and bit shifts. The question posed was how we can compare the
sign of numbers.
Solution 1 was a*b .... I posed a different solution, simply !(a xor b) >>
31. And now I think about it, I'd rather do
a xor b & 0x70000000 == 0. Here multiplying is inefficient because doing a
simple xor and bitwise and is less work than doing a multiply (on even the
best of the best of machines).
Of course, if I were trying to reinvent the wheel and do multiplication by
using adding and bit shifts, I wouldn't be able to match the work done by
today's best engineers. I'm sure they're doing an excellent job. That's all I can say.
dwrayment wrote:
> ok last post for me if some of you dont like it theres nothing more i can say.
> first off, im not trying to reinvent the multipy op by simulating it with adding and bit shifts. the question posed was how can we compare sign of numbers.
> soultion 1 was a*b .... i posed a different solution simply !a xor b >> 31. and now i think about it id rather do a xor b & 0x70000000 == 0.
Well this is the *real* problem: all that bit-twiddling, and you've gone
and masked off the sign bit.
> here multiplying is inefficient because doing a simple xor and bitwise and is less work than doing a mulitpy (on even the best of the best of machines).
Let's start by working with some correct code:
bool thing( int a, int b )
{
return (a * b) > 0;
}
We can write a truth table (N < 0 and P > 0):
a, b, return
------------
0, 0, false
0, P, false
0, N, false
P, 0, false
P, N, false
P, P, true
N, 0, false
N, P, false
N, N, true
Assuming we have a 32 bit 2's complement machine, that is:
bool thing( int a, int b )
{
return (a && b) && ((a >> 31) == (b >> 31));
}
or (if you prefer):
bool thing( int a, int b )
{
return
(a && b)
&&
0 == ( ( unsigned(a) ^ unsigned(b) ) & 0x80000000U )
;
}
Assuming (again) that either of the above is faster than (a * b) > 0,
why wouldn't an optimizing C++ compiler make the change?
The reason others have objected to your assertion that "bit-twiddling
is faster" is that not only can the compiler do the job for you, so
can modern CPUs, and when the CPU does it you get a double hit as
the generated code is shorter too.
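A small harness (function names are mine) checking that both bit versions, written with `a && b` so the zero rows of the truth table come out false, agree with a plain comparison-based reference. This assumes 32-bit two's-complement int and an arithmetic right shift for negative values:

```cpp
#include <cassert>

// Reference: strictly same sign, zero matching nothing (per the table).
inline bool ref(int a, int b) { return a && b && ((a < 0) == (b < 0)); }

// Shift version: a >> 31 is 0 for non-negative, -1 for negative
// (arithmetic shift assumed, as on virtually all current platforms).
inline bool via_shift(int a, int b)
{
    return (a && b) && ((a >> 31) == (b >> 31));
}

// Mask version: xor leaves the sign bit clear exactly when the signs match.
inline bool via_mask(int a, int b)
{
    return (a && b)
        && 0 == ((unsigned(a) ^ unsigned(b)) & 0x80000000U);
}
```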
Rob.
-- http://www.victim-prime.dsl.pipex.com/
My apologies, it is a xor b & 0x80000000. Typo.
dwrayment wrote:
> my apologies it is a xor b & 0x80000000. typo
Why would you work at the bit level, tying yourself to a particular size
of integer, just to figure out whether two numbers (a and b) are greater
than a third number (zero)? C++ has built-in operators for this.
-Jeff
PS- Please stop top-posting.
> Lets start by working with some correct code:
> bool thing( int a, int b ) { return (a * b) > 0; }
What about overflow?
heinz baer wrote:
> > Lets start by working with some correct code:
> > bool thing( int a, int b ) { return (a * b) > 0; }
> What about overflow?
This was the starting point and thus is correct by definition (it is
the definition). I used the meaningless name "thing" as I don't think it's
a good implementation of any "compare sign" function.
But you're right, my bit-twiddling alternatives didn't attempt to take
account of (or emulate) thing()'s overflow behaviour. I may be wrong,
but IIUC the result of calling thing( a, b ) when a * b would overflow
would be undefined (or maybe implementation-defined) behaviour anyway.
Rob.
-- http://www.victim-prime.dsl.pipex.com/
Trying hard not to even look at this post anymore, but new messages keep
coming up, so I guess I want to answer.
1. Why restrict numbers to 32 bits?
Unless you're planning on making your function a macro, you're doing that
very thing anyway, as the parameter will likely be of type int.
Then of course you might want to make a second function for doubles, etc.
You can, however, make it a macro if you want. In this case, point taken;
I would still choose my solution and make the sacrifice, since most
number crunching is done with 32 bits anyway.
2. Overflow
a xor b & 0x80000000 == 0 has no overflow and does not require checking
for such a thing, as it is not a * b. The numbers do not grow larger by
any factor.
3. There's no need to cast a and b to unsigned, as one gentleman did.
Hoping this is the last reply I need to do, and hoping the original
sender got whatever solution he wants! The rest is all hogwash, really.
"dwrayment" <dw*******@rogers.com> wrote:
> 1. why restrict numbers to 32 bits
> unless your planning on making your function a macro, your doing that very thing anyway, as the parameter will likely be of type int. [...] i would stlll choose my solution and make the (sacfrice) since most number crunching is done with 32 bits anyway.
What about this? (works only for signed integer types using two's complement)
#include <limits>
template <typename T>
inline bool same_sign(const T &a,const T &b)
{
// possibly do something here with
// std::numeric_limits<T>::is_integer or
// std::numeric_limits<T>::is_signed
return ((a ^ b) & std::numeric_limits<T>::min()) == 0;
}
This seems to solve the problem of dependence on word size... or am I mistaken?
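Reproduced self-contained so it can be exercised, the template works across integer widths; note that zero is grouped with the non-negative values here (unlike the (a*b) > 0 version):

```cpp
#include <cassert>
#include <limits>

// min() of a two's-complement signed type has only the sign bit set,
// so it doubles as a sign-bit mask of exactly the right width.
template <typename T>
inline bool same_sign(const T &a, const T &b)
{
    return ((a ^ b) & std::numeric_limits<T>::min()) == 0;
}
```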
Looks great iff numeric_limits<T>::min() is what I think it is, 0x80000000,
etc. But then you specify within the function that it must be two's complement,
so my only qualm, granted it's a tiny one, is designing what appears to be a
generic function but really isn't. But as far as making it work for chars,
shorts, and _int64's, it looks great. The only other thing I'd wonder about is
what ::min() actually does; is it sacrificing efficiency for genericity? If
::min() does more work than multiplying or whatever, then it defeats the
purpose of using xor.
"dwrayment" <dw*******@rogers.com> wrote:
> looks great iff numeric_limits<T>::min() is what i think it is 0x80000000,
numeric_limits<int>::min() is 0x80000000 on 32-bit platforms, or at least
it is on my 32-bit platform (x86, linux, gcc)
> etc.. but then you specify within the function must be two-compliment
I'm not certain of this at all, but I seem to recall that the C standard
actually stipulates that signed integers are represented using two's
complement. I could be way off base, and I don't have a copy of the
standard... can anybody verify this one way or the other?
> so my only quam granted its a tiny one is design what appears to be a generic function but really isnt. but as far as making it work for chars, and shorts and _int64's it looks great. the only other thing id wonder about is what ::min() actually does, is it sacrificing efficiency for generic. if ::min() does more work then multiping or whatever then it defeats the purpose of using xor.
It's my understanding that numeric_limits is usually implemented through the
use of template specialization, and as such, functions like min() can simply
return the appropriate value and quite probably are inlined as well. I
suppose it depends on the quality of the implementation though.
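As for what ::min() costs: with a decent implementation it boils down to an inlined constant. For a 32-bit int it is exactly the hard-coded sign mask, which can be checked directly (a sketch; `<cstdint>` fixed-width types are more recent than this thread):

```cpp
#include <cstdint>
#include <limits>

// numeric_limits<int32_t>::min() is the two's-complement value whose
// only set bit is the sign bit, i.e. the mask 0x80000000.
inline bool min_is_sign_mask()
{
    return static_cast<std::uint32_t>(
               std::numeric_limits<std::int32_t>::min()) == 0x80000000U;
}
```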
"Nate Barney" <na*********@vanderbilt.edu> wrote:
> I'm not certain of this at all, but I seem to recall that the C standard actually stipulates that signed integers are represented using two's complement.
No, the current C standard says that two's complement, one's complement,
and signed-magnitude are the allowable representations. The 1990 C standard
(the one that applies to C++) doesn't go even that far. All it says is that the
positive values have to be the same as their unsigned counterparts (i.e., straight
binary).
"Nate Barney" <na*********@vanderbilt.edu> wrote:
> > so my only quam granted its a tiny one is design what appears to be a generic function but really isnt. [...]
What I mean by that is that the template function would allow programmers to
arbitrarily try using doubles, floats, and god knows what else, structs of any
kind, each of which would have undefined behaviour. Aka, what is
same_sign<double>( a, b )?
As opposed to integers not being two's-complement ints.
"dwrayment" <dw*******@rogers.com> wrote:
> what i mean by that is the template function would allow programmers to arbirtaliy try using doubles, floats and god knows what else structs of any kind
Well, doubles, yes, arbitrary structs, no. This function would only
compile for types for which the appropriate ^, &, and == are defined.
So most structs/classes would be right out. For doubles, I suppose
you could either throw an exception based on
std::numeric_limits<T>::is_integer, or take a cue from the FAQ (or
even the Standard itself!) and just document the types for which this
function should work. I prefer the second approach.
On the other hand, if someone developed an integer class using two's
complement, overloaded the appropriate operators, and specialized
std::numeric_limits<T> for the class, it should work correctly with
same_sign.
This algorithm is clever, but perhaps too clever. If you wanted it to
work with doubles, or regardless of representation scheme of integers,
I think you'd have to resort to a more arithmetic approach.
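A sketch of that more arithmetic approach: comparing each value against zero uses only operator<, so it is representation-independent and works for double as well (zero is grouped with the positives here; the name is mine):

```cpp
#include <cassert>

// Representation-independent sign test: relies only on operator<,
// so it works for int, double, or any ordered numeric type.
template <typename T>
inline bool same_sign_arith(T a, T b)
{
    return (a < 0) == (b < 0);
}
```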
Nate
I wouldn't want to work with doubles; that's why I would choose to go with
the simpler 32 bits only and not bother with generic coding.
But it's up to the guy asking what he wants to do.
"Claude Gagnon" <cl**********@videotron.ca> wrote:
> How can we compare sign of numbers in C++ ?
I've been messing around with direct sign-bit access for quick testing.
Could be applied to floats or integer types too...
Seems to be stable on both BIG and LITTLE endian machines.
Seems to be quick too...
#if defined(WIN32) || defined(LINUX)
#pragma pack(push)
struct SignBit {
unsigned char : 8; // padding
unsigned char : 8; // padding
unsigned char : 8; // padding
unsigned char : 8; // padding
unsigned char : 8; // padding
unsigned char : 8; // padding
unsigned char : 8; // padding
unsigned char : 7; // padding
bool sign : 1; // sign bit access
};
#else // UNIX
struct SignBit {
bool sign : 1; // sign bit access
unsigned char : 7; // padding
unsigned char : 8; // padding
unsigned char : 8; // padding
unsigned char : 8; // padding
unsigned char : 8; // padding
unsigned char : 8; // padding
unsigned char : 8; // padding
unsigned char : 8; // padding
};
#endif
union DblSignBit {
double dbl;
SignBit bit;
};
#if defined(WIN32) || defined(LINUX)
#pragma pack(pop)
#endif
But still gives me the shivers.. :)
any good reasons why I shouldn't be doing this???
"EnTn" <en*****************@yahoo.co.uk> wrote:
> But still gives me the shivers.. :)
> any good reasons why I shouldn't be doing this???
Yeah, despite your half-hearted efforts, it's not portable in
the least. The packing of bit fields is highly implementation-defined.
I've seen different compilers even on the same platform
using different packing orders.
> Yeah, despite your half hearted efforts, it's not portable in the least. The packing of bit fields is highly implementation defined. I've seen different compilers even on the same platform using different packing orders.
ok, so specialise the bit field code for each compiler used..
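An alternative to specialising the bit-field layout per compiler: `std::signbit` from `<cmath>` (standardised after this thread, in C99/C++11) reads the sign bit portably, with no layout or endianness assumptions (wrapper name is mine):

```cpp
#include <cmath>

// std::signbit inspects the floating-point sign bit directly, so
// unlike x < 0 it even distinguishes -0.0 from +0.0, and it needs
// no bit-field or endianness assumptions.
inline bool sign_bit_set(double x) { return std::signbit(x); }
```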
> What about this? (works only for signed integer types using two's complement)
> return ((a ^ b) & std::numeric_limits<T>::min()) == 0;
> This seems to solve the problem of dependence on word size... or am I mistaken?
This is the same or worse in every respect than:
return (a < 0) ^ (b < 0);
"Old Wolf" <ol*****@inspire.net.nz> wrote:
> This is the same or worse in every respect than:
> return (a < 0) ^ (b < 0);
or even
a < 0 != b < 0
keeps the bool from being widened back to int :-)
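For reference, with the implicit parentheses spelled out (names are mine); note both expressions are true when the signs *differ*, i.e. they are the complement of a same-sign test:

```cpp
#include <cassert>

// (a < 0) ^ (b < 0) xors the two bools (after promotion to int);
// (a < 0) != (b < 0) compares them directly and stays a bool.
// Both are true exactly when one operand is negative and the other is not.
inline bool diff_sign_xor(int a, int b) { return (a < 0) ^ (b < 0); }
inline bool diff_sign_neq(int a, int b) { return (a < 0) != (b < 0); }
```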