Bytes IT Community

Why is minus one (-1) equal to true in VB again?

I coulda sworn I was given an explanation during an AppDev class years
ago for VB6, but don't recall the answer. Why is it that -1 is True
in Visual Basic (and now VB.NET)? Bit flags seem like they should
always be 0 or 1 to me... (not that I haven't used VB long enough by
now to know better).

Sorry to pester, but "why is -1 = true?" is a difficult thing to
Google!

Ruffin Bailey
Nov 20 '05 #1
33 Replies


Hi Ruffin,

Why do we start counting in programming languages with zero and in normal counting with one? Just because it is terribly difficult to change those things, I assume, and to keep them backwards compatible.

(I think that if the counting had started at one, it would also have been easier to set False to zero.)

Just my thought.

Cor

Nov 20 '05 #2

I don't have an answer for you, but I do have an opinion. And that is that
you should turn Option Strict on; that way something like

If -1 = True Then

would never even compile. You shouldn't be coding in a way that relies on
implicit conversions to turn integers into Booleans, etc.


Nov 20 '05 #3



Booleans are stored as 32-bit-integers.

'False' = 000000....000 (BIN) = 0 (DEC)
'True' = 111111....111 (BIN) = -1 (DEC)

The first bit is the sign bit; if it is set to 1, that indicates a
negative number.
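The bit patterns above are language-neutral and easy to check. A minimal Python sketch (Python used purely for illustration; the `to_signed` helper is an invented name) reads an all-ones pattern as a two's-complement signed integer:

```python
def to_signed(value, bits):
    """Interpret an unsigned bit pattern as a two's-complement signed integer."""
    if value >= 1 << (bits - 1):      # sign bit set?
        value -= 1 << bits            # wrap into the negative range
    return value

all_ones_16 = 0xFFFF        # a 16-bit word of all ones
all_ones_32 = 0xFFFFFFFF    # a 32-bit word of all ones

print(to_signed(all_ones_16, 16))   # -1
print(to_signed(all_ones_32, 32))   # -1
print(to_signed(0, 32))             # 0
```

The same all-ones pattern reads as -1 at any width, which is why the 16-bit vs. 32-bit question raised later in the thread does not change the answer.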

--
Herfried K. Wagner [MVP]
<URL:http://dotnet.mvps.org/>
Nov 20 '05 #4

Hi Herfried,

For a Boolean, this is how it is done: you need only one bulb, which can be on
or off (or one bit in a computer).

:-)

Cor

Nov 20 '05 #5



Sure, but we are running 32-bit computers; that's why a 'Boolean' is 32
bits. Taking the complement of all bits is a very easy operation, so
there is no need to play around with a single bit.

--
Herfried K. Wagner [MVP]
<URL:http://dotnet.mvps.org/>
Nov 20 '05 #6

Booleans have, as far as I know, always been defined as

FALSE = 0
TRUE = !FALSE

so it could be -1, 27, 1, or any other arbitrary value when stored in a field
larger than 1 bit.

Greg
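Greg's definition is easy to poke at; a quick Python sketch (illustrative only, since the thread's language is VB) shows that only zero is false, and that each language then picks its own canonical value for "not false":

```python
# Only zero is false; every nonzero value counts as true.
print(bool(-1), bool(27), bool(1))   # True True True
print(bool(0))                       # False

# Each language picks its own canonical value for "not false":
print(int(not 0))   # Python picks +1
print(~0)           # the bitwise complement of 0 is -1: classic VB's pick
```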


Nov 20 '05 #7

Wow, this one has a long history. Let's see if I can recall the explanation
from my machine-language days...
It basically comes down to a number of things, many historical.
In certain situations in Boolean math (also two's complement), the setting or
clearing of all bits would represent an absolute True or False.
Various hardware, processors, etc. have differing native integer widths. At the
time, it was 4, 8, or 16 bits. Obviously this has changed with time, and will
no doubt continue to change.
Cross platform code works fastest when the values can be manipulated in the
native precision of the processor registers.
There is no standard 1-bit data type, especially in VB.
For normal signed integers, the high bit serves as the sign of the integer.
If you did have a 1-bit signed integer type, then if the value was not 0, it
would be -1 (or -0, but that's a discussion for another day).
If you carry that over to integers of any arbitrary precision, it would make
sense to only need to look at a common register flag. In this case, the Sign
bit. If you needed to convert native integer precision across platforms, this
would require unnecessary casting of values, which would be inefficient.
Besides, casting a nonzero 1-bit integer to any wider integer would still
result in -1. The same is true if you cast a -1 Int32 to an Int64: still -1.
Note I am talking about casting, not a bit-wise compare.
Additionally, by storing the Boolean value as all 1's or all 0's, you get an
additional performance gain when dealing with IO, as you can ignore the byte
ordering (little/middle/big endian): 1111 = 1111 forward or backward.
A Boolean evaluation of an expression is actually a double negative.
If an expression does not evaluate to False (0), then it must be True. So all
nonzero values are True.
Basically, given VAR = 1, the statement
"IF VAR THEN"
would really be more like
"IF (VAR <> 0) [returns True {-1}] THEN"
Low-level language programmers used to take advantage of this to save a couple
of characters in the source, back when you only had a few KB of space if you
were lucky.

Since there is no Bit data type, you can't pass a Bit value as a parameter into
or out of a function; it would still need to be cast to the smallest native
data type supported.
Since VB was designed for 32 bit W/Intel systems, it only makes sense to use the
native data type of the processor for the sake of speed, memory storage, etc. In
this case, that happens to be a 32 Bit Signed Integer. Note that even though
there is a Byte data type, when it is passed to the registers it still goes in
as an Int32.

Sorry if that explanation meandered a little. But hopefully it helps to explain
the "why".

Gerald
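Gerald's point that -1 survives casting to any wider integer can be sketched with a hypothetical `sign_extend` helper (Python for illustration; the name is mine, not from any library):

```python
def sign_extend(value, from_bits, to_bits):
    """Widen a two's-complement value, replicating the sign bit."""
    mask_from = (1 << from_bits) - 1
    value &= mask_from
    if value >> (from_bits - 1):                      # sign bit set
        value |= ((1 << to_bits) - 1) ^ mask_from     # fill the new high bits
    return value

# -1 in 8 bits is 0xFF; widened to 32 bits it is 0xFFFFFFFF -- still -1.
print(hex(sign_extend(0xFF, 8, 32)))    # 0xffffffff
# A positive value just stays itself:
print(hex(sign_extend(0x7F, 8, 32)))    # 0x7f
```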


Nov 20 '05 #8

Hi Herfried,

You will not believe it ;-) I was already prepared for that answer from you.
However, the fact that it is technically easier to read a whole 32-bit word
into a register than a single bit, and with that set the bulb to on, does not
answer why -1 is the logical choice.

:-)

Cor

Nov 20 '05 #9

In the fervor of getting into the mechanics of it all, the simplicity of your
explanation eluded me.
Spot on!

Gerald


Nov 20 '05 #10

According to the help file . . .

Boolean variables are stored as 16-bit (2-byte) numbers.

HTH

--

OHM ( Terry Burns )
. . . One-Handed-Man . . .

Nov 20 '05 #11

* "One Handed Man \( OHM - Terry Burns \)" <news.microsoft.com> scripsit:
Acording to the help file . . .

Boolean variables are stored as 16-bit (2-byte) numbers


You are right ;-).

--
Herfried K. Wagner [MVP]
<URL:http://dotnet.mvps.org/>
Nov 20 '05 #12

This code disassembly appears to store the value 0 for False; if you
assign True to 'v', the ldc.i4 line becomes ldc.i4.1.

So it would appear that, in actual fact, the binary storage for this type is
False = 0 and True = 1.

This seems to be converted differently when the value is expressed as a numeric type.

// Code size 5 (0x5)
.maxstack 1
.locals init ([0] bool v)
IL_0000: nop
IL_0001: ldc.i4.0
IL_0002: stloc.0
IL_0003: nop
IL_0004: ret
} // end of method Form1::Button1_Click

--

OHM ( Terry Burns )
. . . One-Handed-Man . . .

Nov 20 '05 #13

Think about it... when you use numbers in VB without conversion, what data type
is 1? Integer. So, how many bytes does an Integer use? In this case, you have
an Integer value, so converting it to binary as Herfried described, because of
the complement, also turns on the sign bit. Understand?

Mythran



Nov 20 '05 #14

This code disassembly appears to store the value 0 for False; if you
assign True to 'v', the ldc.i4 line becomes ldc.i4.1.

So it would appear that, in actual fact, the binary storage for this type is
False = 0 and True = 1.

This seems to be converted differently when the value is expressed as a numeric type.

// Code size 5 (0x5)
.maxstack 1
.locals init ([0] bool v)
IL_0000: nop
IL_0001: ldc.i4.0
IL_0002: stloc.0
IL_0003: nop
IL_0004: ret
} // end of method Form1::Button1_Click

--

OHM ( Terry Burns )
. . . One-Handed-Man . . .


Nov 20 '05 #15

Hi OHM,

The correct answer, in my opinion, was given by Greg Young.

What value you then take is not important, as long as True is not 0.

Cor
Nov 20 '05 #16



Sure, 'False' is always 0, and 'True' is something <> 0. When you do a
'Not' on a 'Boolean' set to 'False', you get -1 because that's the complement
of 0 interpreted as a signed short.

--
Herfried K. Wagner [MVP]
<URL:http://dotnet.mvps.org/>
Nov 20 '05 #17

My point was that True is not stored as -1; rather, it is stored as 1,
which is different from what everyone else was saying. It is only when you
convert it to a numeric type that it becomes minus 1.

--

OHM ( Terry Burns )
. . . One-Handed-Man . . .

Nov 20 '05 #18

It's stored as -1 if you do a 'Not False'.

--
Herfried K. Wagner [MVP]
<URL:http://dotnet.mvps.org/>
Nov 20 '05 #19


"Ruffin Bailey" <ka****@mailinator.com> wrote
I coulda sworn I was given an explanation during an AppDev class years
ago for VB6, but don't recall the answer. Why is it that -1 is True
in Visual Basic (and now VB.NET)?

Because the logical operators were always bit-wise operators.

That meant And, Or, Xor, and Not could be used on numbers
as well as on Boolean conditional expressions:

7 And 3 = 3

If (2 > 0) And (4 > 2) Then ...
It was an industry standard that False equaled 0, but what
value would satisfy both of these statements?

? = Not 0

0 = Not ?

The Not operator is just a bit-wise complement operator,
as it always had been, and because of that, there is only
one value that can be plugged into the statements above.

0 is a number where all bits are set to 0, so that means
its complement is a value whose bits are all set to 1.
In 2's complement notation (the format used to store signed
values in memory) a value with all its bits set to 1 equates
to -1.

LFS
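Larry's two statements pin the value down; a small Python check (illustrative only, since the thread is about VB) confirms that the bitwise complement of 0 is that value:

```python
# 'Not False' in classic VB is the bitwise complement of the all-zeros pattern.
x = 0
print(~x)                                # -1
print(format(~x & 0xFFFFFFFF, '032b'))   # 32 ones: the pattern behind that -1
# And the round trip holds, satisfying the second statement (0 = Not ?):
print(~(~x))                             # 0
```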
Nov 20 '05 #20

No, it is stored as +1 on the stack.

--

OHM ( Terry Burns )
. . . One-Handed-Man . . .

Nov 20 '05 #21

"Cablewizard" <Ca*********@Yahoo.com> wrote in message news:<#T**************@TK2MSFTNGP12.phx.gbl>...
Additionally, by storing the Boolean value as all 1's or all 0's, you get an
additional performance gain when dealing with IO, as you can ignore the Byte
Ordering (Little/Middle/Big Endean). 1111 = 1111 forward or backward.
What the heck is middle endian? ;^) The other two, being a Mac user
when I'm not at work, have pretty clear meanings.

But this makes some sense for why things are -1 and not 1 or (as you
point out below) -0. False is "everything off" and True is "every
switch on". And 11111111 = -1 -- but it's not -1 in signed Integers,
which is confusing. That'd be 10000001, right?
IF you did have a 1 Bit signed integer type, if the value was not 0, it would
be -1 (or -0, but that's a discussion for another day)
Actually that makes sense and we can dispense with the new discussion!
;^) That's part of my original wondering about why it works as it
does. 10000000 in a signed byte would be "negative 0000000". I've
done just enough assembly
(http://mywebpages.comcast.net/rufbo1/mactari/mact.html) to have a
decent handle on how the bits work. It would have made sense to me to
have "official True" be 10000000 (a quick rol to check) or 00000001
(1=1), but 10000001 (signed -1) didn't make much sense at all.

Now, again, 11111111 is -1 in unsigned bytes, right? That makes some
sense as True.

Though admittedly I'm starting to wonder if I haven't got negative
signed byte expressions off in my head, as everyone seems to accept
11111111 (realizing I'm simplifying everything using one byte instead
of 32 bit fun) as -1 *signed*. ???
A Boolean evaluation of an expression is actually a double negative.
If an expression does not evaluate to False(0), then it must be True. So all non
zero values are True.


That's been mentioned a few times, and immediately after posting it
occurred to me to check and see what CBool(1) gave me (True). But why
CInt(True) gave me -1 for "official True" was a mystery. I think the
"all bits True in the byte" makes the most sense. Whether that's -1
in signed- or unsigned-land, I'll straighten out in my head later!

Thanks everyone for the posts. Feel free to lambast my relatively
weak grasp of the situation! If I didn't have so many bad habits
already, the "turn on Option Strict" post may be the best suggestion of
them all.

Ruffin Bailey
Nov 20 '05 #22


Ruffin,

Middle-endian is when the byte ordering within the word gets shuffled:
1-2-3-4 Big Endian
4-3-2-1 Little Endian
3-4-1-2 / 2-1-3-4 Middle Endian

Take a look at:
http://en.wikipedia.org/wiki/Endianness

This works best on 16-bit computers living in a 32-bit world.
In fact, some IO in Windows still works this way.

Your confusion on the representation of Bits and why -1=11111111 and not
10000001 is because you are thinking as a human :)
Binary math is simplified if you use "Two's Complement" representation of the
binary for negative numbers.
That is how computers like to do it.
This makes the representation backwards from what you would expect.
Basically, take the positive representation, flip the bits, then add 1:
Take the value +1 = 00000001
Flip the bits to make it negative = 11111110
Add 1 = 11111111

So 11111111 = -1 in a Signed Byte
In an Unsigned Byte, it would be 255

Hope this helps to clear things up a little.

Gerald
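Gerald's flip-then-add-one recipe can be written out as a small Python sketch (the `twos_complement_negate` helper is an illustrative name):

```python
def twos_complement_negate(value, bits):
    """Negate by the schoolbook recipe: flip the bits, then add 1."""
    mask = (1 << bits) - 1
    return ((value ^ mask) + 1) & mask    # XOR with all-ones flips every bit

neg_one = twos_complement_negate(1, 8)
print(format(neg_one, '08b'))   # 11111111
print(neg_one)                  # 255 as an unsigned byte...
print(neg_one - 256)            # ...which is -1 when read as signed
```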

Nov 20 '05 #24

As a side note, this is what makes the topic of -0 interesting.
If you do some research, you will see that when using two's complement, you get
a couple of interesting exceptions to the rule that would seemingly break
things. But when they are resolved, they actually end up being what you wanted
in the first place.
-0 and +128 are of significant note.
This is why a signed byte ranges from -128 to +127,
and not ±127 or ±128.

Gerald
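The asymmetric -128..+127 range falls straight out of counting the bit patterns; a quick Python check (the `to_signed8` helper is an invented name):

```python
# 8 bits give 256 patterns: 0..127 with the sign bit clear,
# -128..-1 with the sign bit set -- so the range is asymmetric.
def to_signed8(pattern):
    """Read an 8-bit pattern as a two's-complement signed byte."""
    return pattern - 256 if pattern >= 128 else pattern

values = [to_signed8(p) for p in range(256)]
print(min(values), max(values))   # -128 127

# The oddball: negating -128 (pattern 0x80) by flip-and-add-one
# lands back on 0x80 -- there is no +128 to go to.
negated = ((0x80 ^ 0xFF) + 1) & 0xFF
print(hex(negated))               # 0x80
```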


Nov 20 '05 #25

All of this is a bit superfluous, really. I have discovered that the
compiler stores a +1 on the stack for a Boolean 'True' and a 0
for a Boolean 'False'.
--

OHM ( Terry Burns )
. . . One-Handed-Man . . .
"Cablewizard" <Ca*********@Yahoo.com> wrote in message
news:%2****************@TK2MSFTNGP11.phx.gbl...
As a side note, this is what makes the topic of -0 interesting.
If you do some research, you will see that when using two's complement, you get a couple of interesting exceptions to the rule that would seemingly break
things. But when they are resolved, they actually end up being what you wanted in the first place.
-0 and +128 would be of significant note.
This is why a signed byte ranges from -128 to +127.
And not +- 127 or +- 128.

Gerald

"Cablewizard" <Ca*********@Yahoo.com> wrote in message
news:eq**************@TK2MSFTNGP10.phx.gbl...
Ruffin,

Middle Endian is when you fiddle with the byte ordering in the Word.
1-2-3-4 Big Endian
4-3-2-1 Little Endian
3-4-1-2 / 2-1-3-4 Middle Endian

Take a look at:
http://en.wikipedia.org/wiki/Endianness

This works best on 16Bit computers living in the 32Bit world.
In fact, some IO in Windows still works this way.

Your confusion on the representation of Bits and why -1=11111111 and not
10000001 is because you are thinking as a human :)
Binary math is simplified if you use "Two's Complement" representation of the
binary for negative numbers.
That is how computers like to do it.
This would make the representation backwards from what you would think.
Basically, take a positive representation, flip the bits, then add 1.
Take the value +1 = 0000001
Flip the bits to make negative = 1111110
Add 1 = 11111111

So 11111111 = -1 in a Signed Byte
In an Unsigned Byte, it would be 255

Hope this helps to clear things up a little.

Gerald
"Ruffin Bailey" <ka****@mailinator.com> wrote in message
news:fd**************************@posting.google.c om...
"Cablewizard" <Ca*********@Yahoo.com> wrote in message news:<#T**************@TK2MSFTNGP12.phx.gbl>...
> Additionally, by storing the Boolean value as all 1's or all 0's, you get an > additional performance gain when dealing with IO, as you can ignore
the
Byte > Ordering (Little/Middle/Big Endean). 1111 = 1111 forward or
backward.
What the heck is middle endian? ;^) The other two, being a Mac user
when I'm not at work, have pretty clear meanings.

But this makes some sense for why things are -1 and not 1 or (as you
point out below) -0. False is "everything off" and True is "every
switch on". And 11111111 = -1 -- but it's not -1 in signed Integers,
which is confusing. That'd be 10000001, right?

> IF you did have a 1 Bit signed integer type, if the value was not 0,
it would
> be -1 (or -0, but that's a discussion for another day)

Actually that makes sense and we can dispense with the new discussion!
;^) That's part of my original wondering about why it works as it
does. 10000000 in a signed byte would be "negative 0000000". I've
done just enough assembly
(http://mywebpages.comcast.net/rufbo1/mactari/mact.html) to have a
decent handle on how the bits work. It would have made sense to me to
have "official True" be 10000000 (a quick rol to check) or 00000001
(1=1), but 10000001 (signed -1) didn't make much sense at all.

Now, again, 11111111 is -1 in unsigned bytes, right? That makes some
sense as True.

Though admittedly I'm starting to wonder if I haven't got negative
signed byte expressions off in my head, as everyone seems to accept
11111111 (realizing I'm simplifying everything using one byte instead
of 32 bit fun) as -1 *signed*. ???

> A Boolean evaluation of an expression is actually a double negative.
> If an expression does not evaluate to False(0), then it must be True. So all
> non zero values are True.

That's been mentioned a few times, and immediately after posting it
occurred to me to check and see what CBool(1) gave me (True). But why
CInt(True) gave me -1 for "official True" was a mystery. I think the
"all bits True in the byte" makes the most sense. Whether that's -1
in signed- or unsigned-land, I'll straighten out in my head later!

Thanks everyone for the posts. Feel free to lambast my relatively
weak grasping of the situation! If I didn't have so many bad habits
already, the "Turn on Option Strict" post may be the best suggestion of
them all.

Ruffin Bailey



Nov 20 '05 #26

P: n/a
Agreed. I haven't investigated the actual implementation in dotNet or the other
various languages.
This has become more about the mechanics of it all as opposed to the
implementation.
Although if one was to ask me to guess what the implementation would have been,
my guess would not have been +1.
So I do find the actual implementation to be interesting.

In the end, I think Greg deserves the vote for the "best" answer.
FALSE = 0
TRUE = !FALSE

But as it turns out, how you actually implement that can vary.

Gerald
There are 10 kinds of people in the world.
Those that understand Binary, and those that do not.
(oldie but goodie, and appropriate here)
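
For what it's worth, TRUE = !FALSE lands on different bit patterns depending on what the negation operator does: C's logical `!` turns 0 into 1, while classic VB's `Not` on an integer is a bitwise complement, which turns 0 into -1. A Python sketch of the two conventions (the helper names are made up for illustration):

```python
def c_style_not(x):
    # C's logical negation: !0 == 1, !nonzero == 0
    return 1 if x == 0 else 0

def vb_style_not(x):
    # Classic VB's Not on an integer is a bitwise complement: Not 0 == -1
    return ~x

FALSE = 0
print(c_style_not(FALSE))   # 1, the C-family TRUE
print(vb_style_not(FALSE))  # -1, the VB-family True (all bits set)
```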

"One Handed Man ( OHM - Terry Burns )" <news.microsoft.com> wrote in message
news:OX**************@TK2MSFTNGP12.phx.gbl...
All of this is a bit superfluous really. I have discovered that the
compiler stores a +1 for True on the stack for a boolean type 'True' and 0
for a Boolean Type 'False'
--

OHM ( Terry Burns )
. . . One-Handed-Man . . .
"Cablewizard" <Ca*********@Yahoo.com> wrote in message
news:%2****************@TK2MSFTNGP11.phx.gbl...
As a side note, this is what makes the topic of -0 interesting.
If you do some research, you will see that when using two's complement, you get
a couple of interesting exceptions to the rule that would seemingly break
things. But when they are resolved, they actually end up being what you wanted
in the first place.
-0 and +128 would be of significant note.
This is why a signed byte ranges from -128 to +127.
And not +- 127 or +- 128.

Gerald




Nov 20 '05 #27

P: n/a
I just miss using while(3) in C :(
"Cablewizard" <Ca*********@Yahoo.com> wrote in message
news:%2****************@TK2MSFTNGP12.phx.gbl...



Nov 20 '05 #28

P: n/a
Like you, my 'C' days are well behind me ( probably for quite a few years ),
but I did used to have fun with it. My first program was to write a Star
Trek simulation ( text based ).

Ahh where did all those years go. ?

--

OHM ( Terry Burns )
. . . One-Handed-Man . . .
"Greg Young" <gr********@planetbeach.com> wrote in message
news:e1*************@tk2msftngp13.phx.gbl...
I just miss using while(3) in C :(



Nov 20 '05 #29

P: n/a
Hey did u ever write assembler for the BBC home computer ? Remember the 6502
processor with three registers, one accumulator and two addressing
registers ?

I was only about 17 at the time, but it was big time fun. Those were the
days, no IDE, just pieces of paper to do your design and a fine mind !

--

OHM ( Terry Burns )
. . . One-Handed-Man . . .
"Cablewizard" <Ca*********@Yahoo.com> wrote in message
news:%2****************@TK2MSFTNGP12.phx.gbl...



Nov 20 '05 #30

P: n/a
Actually, I have been holding back another thought on this ( or rather
rejecting bringing it up ), but in JavaScript, when you look for the index of
something, it will return either -1 ( not found ) or the index within the
string. Now once upon a time ( and I might be wrong ), I thought that VB used
to have '1' based strings, so returning '0' ( False ) meant you could not find
the occurrence. This makes perfect sense to me if your strings are '1' based,
just as it does to return (-1) if your strings are '0' based.

Perhaps this is what made the difference ?
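
The convention still shows up in string searching. Python's 0-based `str.find` uses -1 as its "not found" value for exactly the reason described above: 0 is a valid index, so it can't double as a failure code. A 1-based convention (VB6's InStr behaves this way) is free to use 0 instead. A quick illustration:

```python
haystack = "hello"

# 0-based search: index 0 is a real hit, so "not found" has to be -1
assert haystack.find("h") == 0
assert haystack.find("z") == -1

# 1-based convention: 0 is free to mean "not found"
def instr_1based(s, sub):
    return s.find(sub) + 1  # maps "found at 0" to 1 and "not found" (-1) to 0

assert instr_1based(haystack, "h") == 1
assert instr_1based(haystack, "z") == 0
```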

--

OHM ( Terry Burns )
. . . One-Handed-Man . . .
"Ruffin Bailey" <ka****@mailinator.com> wrote in message
news:fd**************************@posting.google.c om...
I coulda sworn I was given an explanation during an AppDev class years
ago for VB6, but don't recall the answer. Why is it that -1 is True
in Visual Basic (and now VB.NET)? Bit flags seem like they should
always be 0 or 1 to me... (not that I haven't used VB long enough by
now to know better).

Sorry to pester, but "why is -1 = true?" is a difficult thing to
Google!

Ruffin Bailey

Nov 20 '05 #31

P: n/a
"Cablewizard" <Ca*********@Yahoo.com> wrote in message news:<#w**************@TK2MSFTNGP12.phx.gbl>...
In the end, I think Greg deserves the vote for the "best" answer.
FALSE = 0
TRUE = !FALSE
Well, the obvious interesting question comes from why CInt(True)
gives -1. If Option Strict is Off, TRUE = !0 works, but the
curiosity is why "official True" is -1, not 1. Practically speaking
there's no reason to bother, I suppose, but my neurosis makes things
like this bother me. ;^)

Tried a few bits of C# to see if there's an equivalent (fraid I
haven't had much programming time in C#, though a good deal in Java),
and it seems to essentially "have Option Strict On", like it or not:
Console.WriteLine((int)true);
Console.WriteLine((Boolean)-1);
Console.WriteLine((Boolean)1);
Console.WriteLine((Boolean)0);

Each line reported an invalid cast. So that's not much help. Neither
was BooleanConverter, which just converted Booleans to strings and not
from ints to booleans or booleans to ints.

It's interesting that the compiler uses 1; might be interesting to
decompile some VB6 (or, better perhaps, VB1) code to see what it did
and see if there's a technical reason behind the -1.
So 11111111 = -1 in a Signed Byte
In an Unsigned Byte, it would be 255


Except when you take "unsigned 0" and subtract one -- which will still
give you 11111111 (right? In the only hardware I know reasonably
well, you'll have a flag set that you've gone negative with the
subtraction, I believe, but that's about it; the byte still holds
11111111). That's why I was thinking -1 = 11111111 in unsigned bytes
(so perhaps the right result from faulty reasoning).

I had the idea that signed bytes used the 7 bit (76543210) to store
the sign, apparently correctly, but sure did bork the storage of the
rest of the byte. Oh well, time to finally learn two's complement.
(Decent explanation: http://www.duke.edu/~twf/cps104/twoscomp.html)
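
On the signed/unsigned question: 11111111 is a single bit pattern with two readings, and which one you get is purely a matter of interpretation. Python's struct module makes the point directly (illustrative, not VB):

```python
import struct

raw = bytes([0b11111111])              # one byte, all eight bits set
(unsigned,) = struct.unpack("B", raw)  # "B": read as unsigned byte
(signed,) = struct.unpack("b", raw)    # "b": read as signed (two's complement) byte

print(unsigned)  # 255
print(signed)    # -1
```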

Thanks for the help,

Ruffin Bailey

PS -- Talk about rethreading your head. Middle endianness is a mess!
Nov 20 '05 #32

P: n/a
LOL! Been there.
I moved a couple years ago, and just recently got around to sorting some of the
boxes.
In one box I had a handful of the 6502's still in original package.
Along with a whole mess of other things that might be fit for a history museum.

Gerald

"One Handed Man ( OHM - Terry Burns )" <news.microsoft.com> wrote in message
news:Ok**************@TK2MSFTNGP11.phx.gbl...
Hey did u ever write assember for the BBC home computer ? Remember the 6502
processor with three registers, one accumulator and two addressing
registers. ?

I was only about 17 at the time, but it was big time fun. Those were the
days, no IDE, just peices of paper to do your design and fine mind !

--

OHM ( Terry Burns )
. . . One-Handed-Man . . .
"Cablewizard" <Ca*********@Yahoo.com> wrote in message
news:%2****************@TK2MSFTNGP12.phx.gbl...
Agreed. I haven't investigated the actual implementation in dotNet or the

other
various languages.
This has become more about the mechanics of it all as opposed to the
implementation.
Although if one was to ask me to guess what the implementation would have

been,
my guess would not have been +1.
So I do find the actual implementation to be interesting.

In the end, I think Greg deserves the vote for the "best" answer.
FALSE = 0
TRUE = !FALSE

But as it turns out, how you actually implement that can vary.

Gerald
There are 10 kinds of people in the world.
Those that understand Binary, and those that do not.
(oldie but goodie, and appropriate here)

"One Handed Man ( OHM - Terry Burns )" <news.microsoft.com> wrote in

message
news:OX**************@TK2MSFTNGP12.phx.gbl...
All of this is a bit superfluous really. I have discovered that the
compiler stores a +1 for True on the stack for a boolean type 'True' and 0 for a Boolean Type 'False'
--

OHM ( Terry Burns )
. . . One-Handed-Man . . .
"Cablewizard" <Ca*********@Yahoo.com> wrote in message
news:%2****************@TK2MSFTNGP11.phx.gbl...
> As a side note, this is what makes the topic of -0 interesting.
> If you do some research, you will see that when using two's complement, you get
> a couple of interesting exceptions to the rule that would seemingly break > things. But when they are resolved, they actually end up being what you wanted
> in the first place.
> -0 and +128 would be of significant note.
> This is why a signed byte ranges from -128 to +127.
> And not +- 127 or +- 128.
>
> Gerald
>
> "Cablewizard" <Ca*********@Yahoo.com> wrote in message
> news:eq**************@TK2MSFTNGP10.phx.gbl...
> > Ruffin,
> >
> > Middle Endian is when you fiddle with the byte ordering in the Word.
> > 1-2-3-4 Big Endian
> > 4-3-2-1 Little Endian
> > 3-4-1-2 / 2-1-3-4 Middle Endian
> >
> > Take a look at:
> > http://en.wikipedia.org/wiki/Endianness
> >
> > This works best on 16Bit computers living in the 32Bit world.
> > In fact, some IO in Windows still works this way.
> >
> > Your confusion on the representation of Bits and why -1=11111111 and not > > 10000001 is because you are thinking as a human :)
> > Binary math is simplified if you use "Two's Complement" representation of the
> > binary for negative numbers.
> > That is how computers like to do it.
> > This would make the representation backwards from what you would think. > > Basically, take a positive representation, flip the bits, then add 1. > > Take the value +1 = 0000001
> > Flip the bits to make negative = 1111110
> > Add 1 = 11111111
> >
> > So 11111111 = -1 in a Signed Byte
> > In an Unsigned Byte, it would be 255
> >
> > Hope this helps to clear things up a little.
> >
> > Gerald
> >
> >
> > "Ruffin Bailey" <ka****@mailinator.com> wrote in message
> > news:fd**************************@posting.google.c om...
> > > "Cablewizard" <Ca*********@Yahoo.com> wrote in message
> > news:<#T**************@TK2MSFTNGP12.phx.gbl>...
> > > > Additionally, by storing the Boolean value as all 1's or all 0's, you get
> an
> > > > additional performance gain when dealing with IO, as you can ignore the
> Byte
> > > > Ordering (Little/Middle/Big Endean). 1111 = 1111 forward or
backward.
> > >
> > > What the heck is middle endian? ;^) The other two, being a Mac user > > > when I'm not at work, have pretty clear meanings.
> > >
> > > But this makes some sense for why things are -1 and not 1 or (as you > > > point out below) -0. False is "everything off" and True is "every
> > > switch on". And 11111111 = -1 -- but it's not -1 in signed Integers, > > > which is confusing. That'd be 10000001, right?
> > >
> > > > IF you did have a 1 Bit signed integer type, if the value was not 0, it
> > would
> > > > be -1 (or -0, but that's a discussion for another day)
> > >
> > > Actually that makes sense and we can dispense with the new discussion!
> > > ;^) That's part of my original wondering about why it works as it
> > > does. 10000000 in a signed byte would be "negative 0000000". I've
> > > done just enough assembly
> > > (http://mywebpages.comcast.net/rufbo1/mactari/mact.html) to have a
> > > decent handle on how the bits work. It would have made sense to me to
> > > have "official True" be 10000000 (a quick rol to check) or 00000001
> > > (1=1), but 10000001 (signed -1) didn't make much sense at all.
> > >
> > > Now, again, 11111111 is -1 in unsigned bytes, right? That makes some
> > > sense as True.
> > >
> > > Though admittedly I'm starting to wonder if I haven't got negative
> > > signed byte expressions off in my head, as everyone seems to accept
> > > 11111111 (realizing I'm simplifying everything using one byte instead
> > > of 32 bit fun) as -1 *signed*. ???
> > >
> > > > A Boolean evaluation of an expression is actually a double negative.
> > > > If an expression does not evaluate to False(0), then it must be True. So
> > > > all non-zero values are True.
> > >
> > > That's been mentioned a few times, and immediately after posting it
> > > occurred to me to check and see what CBool(1) gave me (True). But why
> > > CInt(True) gave me -1 for "official True" was a mystery. I think the
> > > "all bits True in the byte" makes the most sense. Whether that's -1
> > > in signed- or unsigned-land, I'll straighten out in my head later!
> > >
> > > Thanks everyone for the posts. Feel free to lambast my relatively
> > > weak grasp of the situation! If I didn't have so many bad habits
> > > already, the "Turn on Option Strict" post may be the best suggestion of
> > > them all.
> > >
> > > Ruffin Bailey
> >
> >
>
>



Nov 20 '05 #33

as expected ...

using System;

namespace ConsoleApplication1
{
/// <summary>
/// Summary description for Class1.
/// </summary>
class Class1
{
/// <summary>
/// The main entry point for the application.
/// </summary>
[STAThread]
static void Main(string[] args)
{
Console.WriteLine(Convert.ToInt32(true));
Console.WriteLine(Convert.ToBoolean(-1));
Console.WriteLine(Convert.ToBoolean(1));
Console.WriteLine(Convert.ToBoolean(-325));
Console.WriteLine(Convert.ToBoolean(27));
Console.WriteLine(Convert.ToBoolean(0));
}
}
}
1
True
True
True
True
False
"Ruffin Bailey" <ka****@mailinator.com> wrote in message
news:fd**************************@posting.google.com...
"Cablewizard" <Ca*********@Yahoo.com> wrote in message

news:<#w**************@TK2MSFTNGP12.phx.gbl>...
In the end, I think Greg deserves the vote for the "best" answer.
FALSE = 0
TRUE = !FALSE


Well, the obvious interesting question comes from why CInt(True)
gives -1. If Option Strict is Off, TRUE = !0 works, but the
curiosity is why "official True" is -1, not 1. Practically speaking
there's no reason to bother, I suppose, but my neurosis makes things
like this bother me. ;^)

Tried a few bits of C# to see if there's an equivalent (fraid I
haven't had much programming time in C#, though a good deal in Java),
and it seems to essentially "have Option Strict On", like it or not:
Console.WriteLine((int)true);
Console.WriteLine((Boolean)-1);
Console.WriteLine((Boolean)1);
Console.WriteLine((Boolean)0);

Each line reported an invalid cast. So that's not much help. Neither
was BooleanConverter, which just converted Booleans to strings and not
from ints to booleans or booleans to ints.

It's interesting that the compiler uses 1; might be interesting to
decompile some VB6 (or, better perhaps, VB1) code to see what it did
and see if there's a technical reason behind the -1.
So 11111111 = -1 in a Signed Byte
In an Unsigned Byte, it would be 255


Except when you take "unsigned 0" and subtract one -- which will still
give you 11111111 (right? In the only hardware I know reasonably
well, you'll have a flag set that you've gone negative with the
subtraction, I believe, but that's about it; the byte still holds
11111111). That's why I was thinking -1 = 11111111 in unsigned bytes
(so perhaps the right result from faulty reasoning).

I had the idea that signed bytes used bit 7 (of 76543210) to store
the sign, apparently correctly, but sure did bork the storage of the
rest of the byte. Oh well, time to finally learn two's complement.
(Decent explanation: http://www.duke.edu/~twf/cps104/twoscomp.html)

Thanks for the help,

Ruffin Bailey

PS -- Talk about rethreading your head. Middle endianness is a mess!

Nov 20 '05 #34
