
What does '64 bit' mean? Lame question, but hear me out :)

Ok, first of all, let's get the obvious stuff out of the way. I'm an idiot. So please indulge me for a moment. Consider it an act of "community service"....

What does "64bit" mean to your friendly neighborhood C# programmer? The standard answer I get from computer sales people is: "It means that the CPU can process 64 bits of data at a time instead of 32." Ok... I guess I *kind* of understand what that means at an intuitive level, but what does it mean in practice? Consider the following code:

long l = 1;
for (int i = 0; i < 5; i++) {
    Console.WriteLine("Emo says " + l);
    l += 1;
}

How would this code run differently on a 64 bit processor as opposed to a 32 bit processor? Will it run twice as fast since the instructions are processed "64 bits at a time"? Will the 64 bit (long) variable 'l' be incremented more efficiently since now it can be done in a single processor instruction?

Now I want to ask about memory. I think this is the one benefit of 64bit computing that I DO understand. In a 32bit system, a memory pointer can only address 2^32 worth of process memory versus 2^64 worth of memory (wow!) in a 64bit system. I can see how this would be a major advantage for databases like SQL Server which could easily allocate over 4gigs of memory -- but is this a real advantage for a typical C# application?

Finally, I want to ask about interoperability. If I compile a 32bit C# app, will the ADO.NET code that it contains be able to communicate with the 64bit version of SQL Server?
Thanks for helping a newbie,

Larry


Nov 16 '05 #1
I'm no guru when it comes to 64 bit, but here's my understanding.

"64 bit", in terms of raw speed, means "faster" for some operations,
although likely not the example you gave. Why? Because when moving
large amounts of data around, the processor can move them 64 bits at a
time instead of 32 bits at a time. Sort of like doubling the lanes on a
freeway. Now, if you're only moving one car around, doubling the lanes
makes no difference. It makes a difference only at rush hour.

You're right about the pointer thing. 64 bit processors can address
about 4 billion (2^32) times more memory than 32 bit processors. That's
a lot of memory. Of course, that's what they said about 64 KB way back when. :)

As far as compatibility, this is what I've heard.

Your example of talking to SQL Server through ADO.NET is the easiest to
answer: all will be well. Because data moving between ADO.NET and SQL
Server is (usually) serialized across a network, neither end knows what
processor the other end is using or even what language it's written in.
Ahh, the beauty of decoupling.
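
To make that concrete, here's a minimal sketch of the kind of ADO.NET code in
question (the connection string and the Orders table are made up for
illustration). Nothing in it depends on whether the client process or the
server is 32-bit or 64-bit; the request goes over the wire either way:

using System;
using System.Data.SqlClient;

class AdoDemo
{
    static void Main()
    {
        // Hypothetical connection string and table, for illustration only.
        string connStr = "Server=myServer;Database=myDb;Integrated Security=true";
        using (SqlConnection conn = new SqlConnection(connStr))
        using (SqlCommand cmd = new SqlCommand("SELECT COUNT(*) FROM Orders", conn))
        {
            conn.Open();
            // The same compiled IL runs unchanged in a 32-bit or 64-bit process,
            // and SQL Server's bitness is hidden behind the network protocol.
            int count = (int)cmd.ExecuteScalar();
            Console.WriteLine("Rows: " + count);
        }
    }
}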

A more interesting question is whether your 64-bit .NET application
will be able to call old 32-bit DLLs to do things, or vice versa:
whether your 32-bit .NET application will be able to call
64-bit-compiled DLLs to do things. I know that the 64-bit processors
have a 32-bit compatibility mode built in, so they'll "downshift" to run
32-bit code, but I can't recall what was said about one kind calling the
other. I'll leave that to wiser folk.
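
One thing that is easy to check from managed code, though, is which kind of
process you're actually in. A small sketch (the native DLL and its export are
hypothetical names, purely for illustration); the relevant point is that a
P/Invoke'd native DLL loads into the calling process, so its bitness has to
match that process:

using System;
using System.Runtime.InteropServices;

class InteropCheck
{
    // Hypothetical 32-bit native library and function, for illustration.
    [DllImport("legacy.dll")]
    static extern int DoLegacyWork(int value);

    static void Main()
    {
        // IntPtr.Size is 4 in a 32-bit process and 8 in a 64-bit process.
        Console.WriteLine("Pointer size: " + IntPtr.Size + " bytes");

        if (IntPtr.Size == 4)
        {
            // Safe only when the process bitness matches the DLL's bitness.
            Console.WriteLine(DoLegacyWork(42));
        }
        else
        {
            Console.WriteLine("64-bit process: a 64-bit build of legacy.dll would be needed.");
        }
    }
}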

Nov 16 '05 #2
AMD has a whitepaper available on some of the key ways you can benefit
from 64-bit programming.

http://www.amd.com/us-en/assets/cont...hite_Paper.pdf

As far as your example goes, going from 32 to 64 bit would make no
difference whatsoever. Your application needs to be programmed to
take advantage of the extra system capabilities.

Consider what games could do graphics-wise with DirectX 6 versus what
they can do now with a high end video card and DirectX 9. Running an
ancient game (think of King's Quest) on a system with DirectX 9 will not
make the game look any nicer; you need to write something to take
advantage of what DX9 and the hardware can offer.

DOS apps that were written to use extended memory (beyond the 640k
barrier) didn't suddenly get to use 4 GB of RAM when run under Windows
with tons of memory; they needed to be rewritten to take advantage of
the architecture.

Same with 64 bit. As a typical programmer you may never notice a
difference in your coding, but you can bet the compiler will know what to
do with those extra registers.

Larry David wrote:
<snip>

Nov 16 '05 #3
"64 bit" is not a clearly defined label. It means that *something*
inside the CPU is 64 bit wide but it doesn't say what!

Generally, though, a 64-bit CPU can be expected to have a "word size"
of 64 bit. A "word" is the unit of data that the CPU can transport
and process without having to slice it up into smaller pieces.

So a 64-bit CPU should be able to perform 64-bit integer arithmetic at
the same speed as today's 32-bit CPUs perform 32-bit arithmetic. And
the size of a memory address should be 64 bit as well, which gives you
the increased memory range you mentioned.

But things immediately get a bit blurry again because the *physical*
memory range of a CPU might very well be restricted to less than 64
bit for technical or cost reasons; for instance, some Intel 32-bit
CPUs (the 386SX, for example) actually had only a 24-bit address bus.
On the other hand, current 32-bit CPUs can actually process up to 80 bits
internally at once, but only in the floating-point unit (FPU).

And then there's the problem with wasted space. Lots of data actually
fits in 32 bits just fine, which is one reason why we're so slow to
move to 64-bit systems. Now when you have a 64-bit CPU but you
actually just need 32-bit numbers you have two choices: pack two
32-bit numbers each into a 64-bit word and waste time with packing &
unpacking; or only put one 32-bit number in a 64-bit word and waste
half the memory space, in main memory and in the CPU cache!
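
To put a rough number on that second option, here is a small C# sketch
(the array length is arbitrary): the same million values take twice the
space when each one is stored in a 64-bit slot instead of a 32-bit one.

using System;

class Padding
{
    static void Main()
    {
        // One million values that each fit comfortably in 32 bits.
        int[] packed = new int[1000000];    // ~4 MB of payload
        long[] padded = new long[1000000];  // ~8 MB of payload for the same values

        Console.WriteLine("int[] payload:  " + (packed.Length * sizeof(int)) + " bytes");
        Console.WriteLine("long[] payload: " + (padded.Length * sizeof(long)) + " bytes");
    }
}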

So whether a 64-bit CPU will actually speed up your application is
rather doubtful. You can only expect a significant gain if you're
already processing 64-bit integers. Likewise, the increased memory
range will only benefit you directly if you're rummaging through huge
databases; however, since operating systems and applications tend to
get bigger and bigger anyway, this should still benefit the user who
runs multiple programs at once.

The whole situation is quite a bit different from the 16-to-32 bit
switch, from a perspective of expected gains. Back then everyone was
constantly bumping against the 16-bit range which simply isn't enough
to do much useful work, either in terms of value ranges or in terms of
memory space. We're slowly exhausting the 2 GB RAM Windows leaves for
apps but it's not critical yet, and 32 bits as a computational range
have proved sufficient for nearly anything...
--
http://www.kynosarges.de
Nov 16 '05 #4
Larry,

Mainframes have long used many more bits in their processors (registers).

It means that things can be done more quickly, in fewer cycles. In the
beginning this, along with the microprocessor's very limited instruction set
(which also required more cycles), was the main difference between a
microprocessor and a mainframe processor.

It is needed for memory addressing as well, which in fact happens much more
often than all the processing you do in your programs yourself.

When you know how much was done in the sixties with 8 KB, you may be
surprised that the memory we have now is not enough; however, now we want
more and more fast multimedia processing, and huge amounts of memory are
needed to get that done well.

Just my thought,

Cor
Nov 16 '05 #5
"Larry David" <My***************@HealthyChoice.org> wrote in
news:Qp********************@giganews.com...
...
> Consider the following code:
>
> long l = 1;
> for (int i = 0; i < 5; i++) {
>     Console.WriteLine("Emo says " + l);
>     l += 1;
> }
>
> How would this code run differently on a 64 bit processor as opposed
> to a 32 bit processor?
I didn't test it, but I'm quite sure that "Console.WriteLine" takes > 99% of
the time in this sample. This operation is probably memory-bound, i.e. the
CPU spends most of the time waiting for the RAM. A 64-bit memory interface
would probably make this a lot faster.
> Will it run twice as fast since the instructions are
> processed "64 bits at a time"?
Depends on many many other factors, like cache size, memory speed, graphics
speed...
> Will the 64 bit (long) variable 'l' be
> incremented more efficiently since now it can be done in a single
> processor instruction?
Probably yes. Current 32-bit processors do have 64-bit and 128-bit
registers/operations (MMX&SSE), but AFAIK neither the .net JIT nor VC++'s
native compiler emit these, so adding two longs takes 2 additions on a
current processor.
Another thing you should keep in mind is that the JIT (just as any good
compiler) tries to enregister variables, to reduce slow memory accesses.
Enregistering a 64-bit variable to 2 32-bit registers is quite expensive as
x86 processors don't have that many registers.
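
To see for yourself how much of the time goes to Console.WriteLine versus the
long arithmetic, a rough sketch along these lines works (iteration counts are
arbitrary; System.Diagnostics.Stopwatch is available from .NET 2.0 on):

using System;
using System.Diagnostics;

class LongTiming
{
    static void Main()
    {
        const int n = 10000000;

        Stopwatch sw = Stopwatch.StartNew();
        long l = 0;
        for (int i = 0; i < n; i++)
        {
            l += 1;                       // pure 64-bit addition
        }
        sw.Stop();
        Console.WriteLine("adds only: " + sw.ElapsedMilliseconds + " ms (l=" + l + ")");

        sw = Stopwatch.StartNew();
        for (int i = 0; i < 1000; i++)    // far fewer iterations...
        {
            Console.WriteLine("Emo says " + i);
        }
        sw.Stop();
        Console.WriteLine("WriteLine loop: " + sw.ElapsedMilliseconds + " ms");
    }
}
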
> Now I want to ask about memory. I think this is the one benefit of 64bit
> computing that I DO understand. In a 32bit system, a memory pointer can
> only address 2^32 worth of process memory versus 2^64 worth of memory
> (wow!) in a 64bit system. I can see how this would be a major advantage
> for databases like SQL Server which could easily allocate over 4gigs of
> memory -- but is this a real advantage for a typical C# application?


You can turn off the GC and save lots of time ;-)

Seriously, if you need that much memory (e.g. for processing high-res
medical tomography data), you'll benefit from it; otherwise, you probably
won't. A large address space does have other benefits (e.g. disk access is
often done through the memory interface as well, where 4 GB isn't that much
any more), but I think .net mostly shields you from those, because of its
portability.

Niki
Nov 16 '05 #6
Christoph Nahr wrote:
The whole situation is quite a bit different from the 16-to-32 bit
switch, from a perspective of expected gains. Back then everyone was
constantly bumping against the 16-bit range which simply isn't enough
to do much useful work, either in terms of value ranges or in terms of
memory space. We're slowly exhausting the 2 GB RAM Windows leaves for
apps but it's not critical yet, and 32 bits as a computational range
have proved sufficient for nearly anything...


Weird, though, that the masses moved (or will move) from 16 to 32 to
64-bit machines in a relatively short time-frame, but will likely stay
at 64 for a much longer time. Decades? Maybe some will call me naive,
but it's hard to imagine anyone needing more addressability than what 64
bits offer...

Nov 16 '05 #7
In article <ld********************************@4ax.com>,
ch****@nospam.invalid says...
Christoph Nahr wrote:
The whole situation is quite a bit different from the 16-to-32 bit
switch, from a perspective of expected gains. Back then everyone was
constantly bumping against the 16-bit range which simply isn't enough
to do much useful work, either in terms of value ranges or in terms of
memory space. We're slowly exhausting the 2 GB RAM Windows leaves for
apps but it's not critical yet, and 32 bits as a computational range
have proved sufficient for nearly anything...


Weird, though, that the masses moved (or will move) from 16 to 32 to
64-bit machines in a relatively short time-frame, but will likely stay
at 64 for a much longer time. Decades? Maybe some will call me naive,
but it's hard to imagine anyone needing more addressability than what 64
bits offer...


Well, if you believe Moore's law will remain in effect... Memory
doubles approximately every 1.5 years. So it should have taken 24 years
(1.5*16) to run out of 32 bits, which is not *too* far off.
Again ASSuming Moore holds, we should have 48 years left in 64b
processors. ...and another 96 years in 128bit processors. ;-)

OTOH, a 128bit FXU would go a long way in eliminating those dreaded
floats. ;-)

--
Keith
Nov 16 '05 #8
Larry David wrote:
What does "64bit" mean to your friendly neighborhood C# programmer?
The standard answer I get from computer sales people is: "It means that
the CPU can process 64 bits of data at a time instead of 32."


64-bit, IA64, et al. is 97% marketing hype that has little/no value to
consumers. However, microprocessor manufacturers are always looking for ways
to sell more chips, so what you are seeing is the start of a marketing blitz
which will undoubtedly focus on 'more is better'.

The fact of the matter is that 64-bit architectures will only benefit
large-scale (database) servers, in that they allow for a greatly expanded
addressable memory space. 64-bit file access is already possible under Win32.

In answer to your question, 64-bit means nothing to your 'friendly neighborhood
C# programmer'.
Nov 16 '05 #9
Bitstring <ld********************************@4ax.com>, from the
wonderful person chrisv <ch****@nospam.invalid> said
Christoph Nahr wrote:
The whole situation is quite a bit different from the 16-to-32 bit
switch, from a perspective of expected gains. Back then everyone was
constantly bumping against the 16-bit range which simply isn't enough
to do much useful work, either in terms of value ranges or in terms of
memory space. We're slowly exhausting the 2 GB RAM Windows leaves for
apps but it's not critical yet, and 32 bits as a computational range
have proved sufficient for nearly anything...


Weird, though, that the masses moved (or will move) from 16 to 32 to
64-bit machines in a relatively short time-frame, but will likely stay
at 64 for a much longer time. Decades? Maybe some will call me naive,
but it's hard to imagine anyone needing more addressability than what 64
bits offer...


The steps are not linear though .. 32 bit was 64k times more address
space than 16 bit.

64 bit is 4,194,304k times bigger address space than 32bit .. a rather
taller step.

The next step to 128 bits is ridiculous - there isn't enough memory on
the planet to require a 128bit address right now (however I can think of
some uses for 128 bit math!).

Actually for most of us the main advantage is going to be faster 64bit
(and up) maths and more 64bit (and up) registers, and higher bandwidth.
All of which are really useful for video/photo editing and encoding and
similar stuff (and halfway useful for some maths intensive stuff).

'Hello world' will probably run no faster .. maybe slower .. and will
almost certainly be a larger executable.

--
GSV Three Minds in a Can
Outgoing Msgs are Turing Tested,and indistinguishable from human typing.
Nov 16 '05 #10
In article <5q**************@from.is.invalid>, GS*@quik.clara.co.uk
says...
Bitstring <ld********************************@4ax.com>, from the
wonderful person chrisv <ch****@nospam.invalid> said
Christoph Nahr wrote:
The whole situation is quite a bit different from the 16-to-32 bit
switch, from a perspective of expected gains. Back then everyone was
constantly bumping against the 16-bit range which simply isn't enough
to do much useful work, either in terms of value ranges or in terms of
memory space. We're slowly exhausting the 2 GB RAM Windows leaves for
apps but it's not critical yet, and 32 bits as a computational range
have proved sufficient for nearly anything...
Weird, though, that the masses moved (or will move) from 16 to 32 to
64-bit machines in a relatively short time-frame, but will likely stay
at 64 for a much longer time. Decades? Maybe some will call me naive,
but it's hard to imagine anyone needing more addressability than what 64
bits offer...


The steps are not linear though .. 32 bit was 64k times more address
space than 16 bit.


Of course they aren't. Moore isn't either. Every bit doubles the
address space. Moore's "law" says that transistors (thus memory cells)
double every 18 months. Thus address bits are linear with time
(.67bits/year), if you believe Moore.
64 bit is 4,194,304k times bigger address space than 32bit .. a rather
taller step.
Nope, it's only 32 "Moore-intervals" bigger. Moore is logarithmic too.
The next step to 128 bits is ridiculous - there isn't enough memory on
the planet to require a 128bit address right now (however I can think of
some uses for 128 bit math!).
Oh, you're not a believer in Moore. Tsk, tsk.
Actually for most of us the main advantage is going to be faster 64bit
(and up) maths and more 64bit (and up) registers, and higher bandwidth.
All of which are really useful for video/photo editing and encoding and
similar stuff (and halfway useful for some maths intensive stuff).
In this particular case, there is also an advantage to more registers.
But we're getting close to the virtual memory limit (which is in
reality about 2GB, not 4GB). 64b solves that problem for at least my
lifetime (less than 96 years ;-).
'Hello world' will probably run no faster .. maybe slower .. and will
almost certainly be a larger executable.


No reason for it to be larger at all. No reason for it to be slower
either. Since it's not doing anything, there is no reason to assume it
will be faster though.

--
Keith
Nov 16 '05 #11
Bruce Wood wrote:
A more interesting question is whether your 64-bit .NET application
will be able to call old 32-bit DLLs to do things, or vice versa:
whether your 32-bit .NET application will be able to call
64-bit-compiled DLLs to do things. I know that all 64-bit processors
have 32-bit emulators built in, so they'll "downshift" to run 32-bit
code, but I can't recall what was said about one type calling the
other. I'll leave that to wiser folk.


Well, actually the whole idea of DLLs is outdated in .NET, isn't it? The
idea of .NET was to create a framework that is independent of
architecture (albeit mostly limited to Microsoft operating systems). So
a program, once compiled, doesn't care if it's on a 32-bit processor or a
64-bit one, or even care if it's running on an x86-compatible processor
for that matter. There is no dependence on bittedness or instruction set.

Yousuf Khan
Nov 16 '05 #12
GSV Three Minds in a Can wrote:
'Hello world' will probably run no faster .. maybe slower .. and will
almost certainly be a larger executable.


I can recall writing a fully functional "hello world" program in
16 bytes; most of the space was used up holding the letters for "hello
world", and the rest were the instructions. :-)

Assembly language was a gas.

Yousuf Khan
Nov 16 '05 #13
Christoph Nahr wrote:
"64 bit" is not a clearly defined label. It means that *something*
inside the CPU is 64 bit wide but it doesn't say what!

Generally, though, a 64-bit CPU can be expected to have a "word size"
of 64 bit. A "word" is the unit of data that the CPU can transport
and process without having to slice it up into smaller pieces.
Actually, I think the word size is always the same, 16-bit. 32-bit
is called a double word (dword), and 64-bit is called a quadword (qword).

I think what you're really trying to talk about is called the "register size".
And then there's the problem with wasted space. Lots of data actually
fits in 32 bits just fine, which is one reason why we're so slow to
move to 64-bit systems. Now when you have a 64-bit CPU but you
actually just need 32-bit numbers you have two choices: pack two
32-bit numbers each into a 64-bit word and waste time with packing &
unpacking; or only put one 32-bit number in a 64-bit word and waste
half the memory space, in main memory and in the CPU cache!
There's not necessarily any wasted space; it depends on the 64-bit data model
that Microsoft adopted for Windows. If you look at this link, it
discusses the various 64-bit data models, such as LP64,
ILP64, & LLP64. I believe that Microsoft has chosen the LLP64 model,
which means pointers (and a dedicated 64-bit integer type) are 64-bit, but
both int and long remain 32-bit.

64-BIT PROGRAMMING MODELS
http://www.opengroup.org/public/tech/aspen/lp64_wp.htm

LLP64 recognizes the fact that perhaps most calculations won't require
using 64-bit integers (if you /really/ do need a 64-bit integer, there's an
explicit type for it), but the memory addressing certainly will.
So whether a 64-bit CPU will actually speed up your application is
rather doubtful. You can only expect a significant gain if you're
already processing 64-bit integers. Likewise, the increased memory
range will only benefit you directly if you're rummaging through huge
databases; however, since operating systems and applications tend to
get bigger and bigger anyway, this should still benefit the user who
runs multiple programs at once.
The 64-bit CPU will speed up your applications, but not because of the
64-bit upgrade itself. Some CPU manufacturers have taken the opportunity to add
a lot of other features at the same time as they widened the registers.
For example, they added faster memory interfaces
on the processor, and they doubled the number of general-purpose registers
from 8 to 16. So even if you never need to use the
full 64 bits, you still have access to twice as many registers. Etc.
The whole situation is quite a bit different from the 16-to-32 bit
switch, from a perspective of expected gains. Back then everyone was
constantly bumping against the 16-bit range which simply isn't enough
to do much useful work, either in terms of value ranges or in terms of
memory space. We're slowly exhausting the 2 GB RAM Windows leaves for
apps but it's not critical yet, and 32 bits as a computational range
have proved sufficient for nearly anything...


Actually it only seemed that way because the Intel 16-bit x86
instruction set really had a 20-bit memory addressing model. In other
words, Intel's 16-bit architecture was an extended 16-bit design. If Intel had
used a pure 16-bit memory model, then the maximum amount of memory
would really have been 64KB, and we would've been ready to switch to a
pure 32-bit instruction set probably by 1982. I don't know if you
remember computers like the Commodore 64 or the Apple II, which were
pure 16-bit addressing models. Intel extended the life of 16-bit by
almost a decade because of this one kludge. But it was a kludge, and
eventually all kludges come to a screeching halt and everybody clamours
to get away from them.

Yousuf Khan
Nov 16 '05 #14
On Fri, 21 Jan 2005 19:58:57 -0500, Yousuf Khan wrote:
Christoph Nahr wrote:
"64 bit" is not a clearly defined label. It means that *something*
inside the CPU is 64 bit wide but it doesn't say what!

Generally, though, a 64-bit CPU can be expected to have a "word size"
of 64 bit. A "word" is the unit of data that the CPU can transport
and process without having to slice it up into smaller pieces.
Actually, I think the word size is always the same size, 16-bit. 32-bit
is called double word (dword), and 64-bit is called quadword (qword).


Yousuf, you're just so PC! The term "word" has been used in so many
different ways that it's not possible to tell what it is without defining the
architecture. For example, an S/360 "word" is 32 bits. A "word" was
originally the term used for the size of the register(s), or "bitness", if
you must. It's changed meaning several times since, but there is no
standard "word".
I think what you're really trying talk about is called the "register
size".


"bitness". ;-)

<snip>

--
Keith
Nov 16 '05 #15
On Fri, 21 Jan 2005 15:23:18 -0500, Keith R. Williams <kr*@att.bizzzz>
wrote:
64 bit is 4,194,304k times bigger address space than 32bit .. a rather
taller step.


Nope, it's only 32 "Moore-intervals" bigger. Moore is logarithmic too.


In other words, instead of being 4,194,304K steps up, it's only 32
Moore steps? ;)

p.s. sorry can't help it :pPpP

--
L.Angel: I'm looking for web design work.
If you need basic to med complexity webpages at affordable rates, email me :)
Standard HTML, SHTML, MySQL + PHP or ASP, Javascript.
If you really want, FrontPage & DreamWeaver too.
But keep in mind you pay extra bandwidth for their bloated code
Nov 16 '05 #16
On Fri, 21 Jan 2005 19:58:57 -0500, Yousuf Khan <bb****@ezrs.com>
wrote:
Generally, though, a 64-bit CPU can be expected to have a "word size"
of 64 bit. A "word" is the unit of data that the CPU can transport
and process without having to slice it up into smaller pieces.


Actually, I think the word size is always the same size, 16-bit. 32-bit
is called double word (dword), and 64-bit is called quadword (qword).


I don't think this is universal, Yousuf. I remember some years back
getting rather confused trying to figure out some programming stuff
where some of the docs I found kept referring to a "word" and threw in
32bit along the way.
--
L.Angel: I'm looking for web design work.
If you need basic to med complexity webpages at affordable rates, email me :)
Standard HTML, SHTML, MySQL + PHP or ASP, Javascript.
If you really want, FrontPage & DreamWeaver too.
But keep in mind you pay extra bandwidth for their bloated code
Nov 16 '05 #17
Larry David wrote:
How would this code run differently on a 64 bit processor as opposed
to a 32 bit processor? Will it run twice as fast since the instructions
are processed "64 bits at a time"? Will the 64 bit (long) variable 'l'
be incremented more efficiently since now it can be done in a single
processor instruction?


I can think of several ways in which 64-bit processors impact everyday
programming, although much of it applies to low-level systems
programming rather than high-level applications development:

1. They speed up extended-precision arithmetic. Many number-theoretic
programs, such as encryption algorithms, use very large integers - say,
1024 bits. They do calculations on these a word at a time, so the fewer
words, the more quickly they can operate (or alternatively, the larger
the integers they can use in the same time). Extended-precision integers are
also an important basic datatype in many functional languages.

2. You can encode more stuff in your pointers. Considering no real
machines will have even 2^40 bytes of memory, we suddenly have a lot of
free bits inside pointers that we can use to encode additional
information about those references, which is a very useful trick in
interpreters, virtual machines, and garbage collectors.

3. Internal fragmentation of bitfields is decreased. If you have three
20-bit fields, you would need three 32-bit words to hold them, but only
one 64-bit word, assuming you didn't want them crossing word boundaries
(a good assumption if you want to modify them quickly). Now imagine a
million 20-bit fields.

4. They speed up bit array access. You can load, store, and manipulate
an array of up to 64 bits in a single register quickly. Some operations
on larger bit arrays are also sped up, such as finding the first set bit
(you can skip blocks of 64 zero bits at a time; see the sketch after this list).

5. They can be used to perform some 32-bit operations more efficiently.
For example, you can divide a 32-bit number by a constant by
performing a 64-bit multiply followed by a shift. As another example, if
you pack four 32-bit numbers into two 64-bit words, you can compute
their pairwise AND, OR, XOR, etc. in one operation. You can get a lot
cleverer than this, even creating custom algorithms that depend on
packing data into 64-bit numbers.

6. They can make accidental overflow in C programs less likely, since
the "int" type is liable to be 64 bits wide on such a machine (although
this is no substitute for overflow checking). Alternatively, you can use
the extra bits as a fast way of detecting overflow of 32-bit quantities,
in lieu of an overflow flag (or a way of checking it).
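
Here's a minimal sketch of point 4 (the names are illustrative, not from any
particular library): storing a bit array as 64-bit words lets the scan for the
first set bit skip 64 zero bits per loop iteration.

using System;

class BitScan
{
    // Returns the index of the first set bit, or -1 if none is set.
    static int FirstSetBit(ulong[] words)
    {
        for (int w = 0; w < words.Length; w++)
        {
            ulong word = words[w];
            if (word == 0)
                continue;                 // skip 64 zero bits at once

            int bit = 0;
            while ((word & 1UL) == 0)     // locate the bit within this word
            {
                word >>= 1;
                bit++;
            }
            return w * 64 + bit;
        }
        return -1;
    }

    static void Main()
    {
        ulong[] bits = new ulong[4];          // a 256-bit array, all zero
        bits[2] = 1UL << 5;                   // set bit 133 (2*64 + 5)
        Console.WriteLine(FirstSetBit(bits)); // prints 133
    }
}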

That's a few things, and they may impact your app indirectly, but by and
large it's more likely 64-bit machines will *break* your program than
make it faster. Be careful and never assume a certain word size, even
implicitly. In particular, don't serialize integers' bit patterns
directly to/from memory.
--
Derrick Coetzee, Microsoft Speech Server developer
This posting is provided "AS IS" with no warranties, and confers no
rights.
Nov 16 '05 #18
On Fri, 21 Jan 2005 19:58:57 -0500, Yousuf Khan <bb****@ezrs.com>
wrote:
Actually, I think the word size is always the same size, 16-bit. 32-bit
is called double word (dword), and 64-bit is called quadword (qword).


Yeah, it's become common usage to refer to 16 bits as a "word" but
originally the "word size" of a CPU means the width of its data and/or
address registers. The terminology kind of ossified in the 16-bit
days, hence the usage of "word" == 16 bits has stuck...
--
http://www.kynosarges.de
Nov 16 '05 #19
Bitstring <MP************************@news.individual.net> , from the
wonderful person Keith R. Williams <kr*@att.bizzzz> said
In article <5q**************@from.is.invalid>, GS*@quik.clara.co.uk
says...
Bitstring <ld********************************@4ax.com>, from the
wonderful person chrisv <ch****@nospam.invalid> said
>Christoph Nahr wrote:
>
>>The whole situation is quite a bit different from the 16-to-32 bit
>>switch, from a perspective of expected gains. Back then everyone was
>>constantly bumping against the 16-bit range which simply isn't enough
>>to do much useful work, either in terms of value ranges or in terms of
>>memory space. We're slowly exhausting the 2 GB RAM Windows leaves for
>>apps but it's not critical yet, and 32 bits as a computational range
>>have proved sufficient for nearly anything...
>
>Weird, though, that the masses moved (or will move) from 16 to 32 to
>64-bit machines in a relatively short time-frame, but will likely stay
>at 64 for a much longer time. Decades? Maybe some will call me naive,
>but it's hard to imagine anyone needing more addressability than what 64
>bits offer...
The steps are not linear though .. 32 bit was 64k times more address
space than 16 bit.


Of course they aren't. Moore isn't either. Every bit doubles the
address space. Moore's "law" says that transistors (thus memory cells)
double every 18 months. Thus address bits are linear with time
(.67bits/year), if you believe Moore.


I don't - not to the extent of another 64 steps, anyway.

<snip>
Oh, you're not a believer in Moore. Tsk, tsk.


Nope, I worked in the SC industry for 25 years. I met the man, once.
However, it was a good heuristic for a while. There are no log growth
curves that go on forever .. in fact Moore's law has just about run out.

<snip>
'Hello world' will probably run no faster .. maybe slower .. and will
almost certainly be a larger executable.


No reason for it to be larger at all. No reason for it to be slower
either.


Depends on the machine architecture .. something sufficiently optimised
for 64 bits may well run 32 or 16 or 8 bit code slower. There is also
likely to be some interesting new cr&p headers in the binary which say
'the following is 32 bit code'. If it isn't 32 bit code, then you can
assume that the instructions got longer, and everything is now 8-byte
aligned.

Go look at what a 'hello world' looks like now, vs the 8086 machine
code (.com) version, and tell me it ain't larger. (I'd allow as how it
is faster!)

--
GSV Three Minds in a Can
Outgoing Msgs are Turing Tested,and indistinguishable from human typing.
Nov 16 '05 #20
On Sat, 22 Jan 2005 08:32:32 +0100, Christoph Nahr wrote:
On Fri, 21 Jan 2005 19:58:57 -0500, Yousuf Khan <bb****@ezrs.com>
wrote:
Actually, I think the word size is always the same size, 16-bit. 32-bit
is called double word (dword), and 64-bit is called quadword (qword).


Yeah, it's become common usage to refer to 16 bits as a "word" but
originally the "word size" of a CPU means the width of its data and/or
address registers. The terminology kind of ossified in the 16-bit
days, hence the usage of "word" == 16 bits has stuck...


Only in the x86 world. In the world of 'z's and PPCs a "word" is still
32bits.

--
Keith
Nov 16 '05 #21
In comp.sys.ibm.pc.hardware.chips Yousuf Khan <bb****@ezrs.com> wrote:
Assembly language was a gas.


Still is. And gas is an assembler :)

-- Robert

Nov 16 '05 #22
On Fri, 21 Jan 2005 19:18:58 -0500, Yousuf Khan <bb****@ezrs.com> wrote:
Bruce Wood wrote:
A more interesting question is whether your 64-bit .NET application
will be able to call old 32-bit DLLs to do things, or vice versa:
whether your 32-bit .NET application will be able to call
64-bit-compiled DLLs to do things. I know that all 64-bit processors
have 32-bit emulators built in, so they'll "downshift" to run 32-bit
code, but I can't recall what was said about one type calling the
other. I'll leave that to wiser folk.


Well, actually the whole idea of DLL's is outdated in .NET isn't it? The
idea of .NET was to create a framework that is independent of
architecture (albeit mostly limited to Microsoft operating systems). So
a program once compiled doesn't care if its on a 32-bit processor or a
64-bit one, or even care if it's running on an x86-compatible processor
for that matter. There is no dependence on bittedness or instruction set.


Huh? They call that "compiled" nowadays?

--
Rgds, George Macdonald
Nov 16 '05 #23
On Sat, 22 Jan 2005 12:32:06 -0500, keith <kr*@att.bizzzz> wrote:
On Sat, 22 Jan 2005 08:32:32 +0100, Christoph Nahr wrote:
On Fri, 21 Jan 2005 19:58:57 -0500, Yousuf Khan <bb****@ezrs.com>
wrote:
Actually, I think the word size is always the same size, 16-bit. 32-bit
is called double word (dword), and 64-bit is called quadword (qword).


Yeah, it's become common usage to refer to 16 bits as a "word" but
originally the "word size" of a CPU means the width of its data and/or
address registers. The terminology kind of ossified in the 16-bit
days, hence the usage of "word" == 16 bits has stuck...


Only in the x86 world. In the world of 'z's and PPCs a "word" is still
32bits.


How much is this "16-bit word" definition due to M$'s pollution of the
computer vocabulary?... not sure how things stand in the Unix world at
present... but yes we've had computers with 16, 24, 32, 36, 60, 64 bit
words over the years that I've worked with. I've always thought of the
word size as the integer register width.

--
Rgds, George Macdonald
Nov 16 '05 #24
George Macdonald <fa********************@tellurian.com> writes:
How much is this "16-bit word" definition due to M$'s pollution of the
computer vocabulary?...
I don't think we can blame this one on Microsoft; if my memory serves
me right, Intel defined the 'word' as a 16-bit unit for the
assembler.
not sure how things stand in the Unix world at
present... but yes we've had computers with 16, 24, 32, 36, 60, 64 bit
words over the years that I've worked with. I've always thought of the
word size as the integer register width.


Yeah, but originally the 8086/8088 was a 16 bit CPU. The 80386
extended that to 32 bit (EAX and friends), and now there are 64 bit
versions as well. IMHO keeping the definition of a "word" fixed
regardless of the implementation of the architecture is the Right
Thing(tm) - otherwise a lot of programs would crash when recompiled
for 32/64 bit machines.

Regards,
Kai
--
Kai Harrekilde-Petersen <khp(at)harrekilde(dot)dk>
Nov 16 '05 #25
On Sat, 22 Jan 2005 23:51:41 +0100, Kai Harrekilde-Petersen
<kh*@harrekilde.dk> wrote:
George Macdonald <fa********************@tellurian.com> writes:
How much is this "16-bit word" definition due to M$'s pollution of the
computer vocabulary?...


I don't think we can blaim this one on microsoft; If my memory serves
me right, Intel defined the 'word' as a 16 bit unit for the
assembler.


Intel was not the first to build a computer with an extended
instruction/addressing/register set with some legacy backwards
compatibility.
not sure how things stand in the Unix world at
present... but yes we've had computers with 16, 24, 32, 36, 60, 64 bit
words over the years that I've worked with. I've always thought of the
word size as the integer register width.


Yeah, but originally the 8086/8088 was a 16 bit CPU. The 80386
extended that to 32 bit (EAX and friends), and now there are 64 bit
versions as well. IMHO keeping the definition of a "word" fixed
regardless of the implementation of the architecture is the Right
Thing(tm) - otherwise a lot of programs would crash when recompiled
for 32/64 bit machines.


"Implementation of the architecture" is the key here though and viewing all
the different x86s as a single entity is a gross error from my POV. For
the 80386, you simply needed a different compiler and linker from what was
used for 8088/86... just as you need a different compiler for AMD64/EM64T.
The fact that the instruction set sytax and mnemonics is familiar is
irrelevant - they are all really different computers.

--
Rgds, George Macdonald
Nov 16 '05 #26
On Sat, 22 Jan 2005 17:07:59 -0500, George Macdonald
<fa********************@tellurian.com> wrote:
On Fri, 21 Jan 2005 19:18:58 -0500, Yousuf Khan <bb****@ezrs.com> wrote:


Well, actually the whole idea of DLL's is outdated in .NET isn't it? The
idea of .NET was to create a framework that is independent of
architecture (albeit mostly limited to Microsoft operating systems). So
a program once compiled doesn't care if its on a 32-bit processor or a
64-bit one, or even care if it's running on an x86-compatible processor
for that matter. There is no dependence on bittedness or instruction set.


Huh? They call that "compiled" nowadays?


That language is at least as old as Pascal isn't it? One spoke of
compiling to p-code...no?

RM

Nov 16 '05 #27
> George Macdonald <fa********************@tellurian.com> wrote:
... we've had computers with 16, 24, 32, 36, 60, 64 bit
words over the years that I've worked with.
12- and 18-bit too, as I recall. And I worked with an ISA
whose direct address space was 19 bits.
I've always thought of the word size as the integer
register width.


Works for me, but ..

The world has generally agreed that a "byte" is 8 bits,
although not always, historically.

My impression is that "word" has never had an agreed
meaning beyond the pages of any particular ISA's manuals.
It's less meaningful than an audio amplifier "watt" was
back in the heady days before the FTC stepped in (not
that they actually fully resolved the matter).

Customer: "What does '64-bit' mean?"
Marketing Dude: "What would you like it to mean?"

--
Regards, Bob Niland mailto:na**@ispname.tld
http://www.access-one.com/rjn email4rjn AT yahoo DOT com
NOT speaking for any employer, client or Internet Service Provider.
Nov 16 '05 #28
On Sat, 22 Jan 2005 17:07:59 -0500, George Macdonald wrote:
On Fri, 21 Jan 2005 19:18:58 -0500, Yousuf Khan <bb****@ezrs.com> wrote:
Bruce Wood wrote:
A more interesting question is whether your 64-bit .NET application
will be able to call old 32-bit DLLs to do things, or vice versa:
whether your 32-bit .NET application will be able to call
64-bit-compiled DLLs to do things. I know that all 64-bit processors
have 32-bit emulators built in, so they'll "downshift" to run 32-bit
code, but I can't recall what was said about one type calling the
other. I'll leave that to wiser folk.


Well, actually the whole idea of DLL's is outdated in .NET isn't it? The
idea of .NET was to create a framework that is independent of
architecture (albeit mostly limited to Microsoft operating systems). So
a program once compiled doesn't care if its on a 32-bit processor or a
64-bit one, or even care if it's running on an x86-compatible processor
for that matter. There is no dependence on bittedness or instruction set.


Huh? They call that "compiled" nowadays?


Sure "they" do. Haven't you heard of a Java "compiler". DotNet is their
answer after being smacked shitless in court for trying to jijack Java.

--
Keith
Nov 16 '05 #29
On Sat, 22 Jan 2005 19:25:56 -0500, Robert Myers wrote:
On Sat, 22 Jan 2005 17:07:59 -0500, George Macdonald
<fa********************@tellurian.com> wrote:
On Fri, 21 Jan 2005 19:18:58 -0500, Yousuf Khan <bb****@ezrs.com> wrote:


Well, actually the whole idea of DLL's is outdated in .NET isn't it? The
idea of .NET was to create a framework that is independent of
architecture (albeit mostly limited to Microsoft operating systems). So
a program once compiled doesn't care if its on a 32-bit processor or a
64-bit one, or even care if it's running on an x86-compatible processor
for that matter. There is no dependence on bittedness or instruction set.


Huh? They call that "compiled" nowadays?


That language is at least as old as Pascal isn't it? One spoke of
compiling to p-code...no?


Not all Pascal compilers output P-code. Borland captured the market with
a real compiler and a workable development platform for *cheap*.

--
Keith
Nov 16 '05 #30
On Sat, 22 Jan 2005 17:41:36 -0500, George Macdonald wrote:
On Sat, 22 Jan 2005 12:32:06 -0500, keith <kr*@att.bizzzz> wrote:
On Sat, 22 Jan 2005 08:32:32 +0100, Christoph Nahr wrote:
On Fri, 21 Jan 2005 19:58:57 -0500, Yousuf Khan <bb****@ezrs.com>
wrote:

Actually, I think the word size is always the same size, 16-bit. 32-bit
is called double word (dword), and 64-bit is called quadword (qword).

Yeah, it's become common usage to refer to 16 bits as a "word" but
originally the "word size" of a CPU means the width of its data and/or
address registers. The terminology kind of ossified in the 16-bit
days, hence the usage of "word" == 16 bits has stuck...


Only in the x86 world. In the world of 'z's and PPCs a "word" is still
32bits.


How much is this "16-bit word" definition due to M$'s pollution of the
computer vocabulary?... not sure how things stand in the Unix world at
present... but yes we've had computers with 16, 24, 32, 36, 60, 64 bit
words over the years that I've worked with. I've always thought of the
word size as the integer register width.


That's the classical definition (as I've noted earlier in this thread).
I'm sure you've missed a bunch too. The fact is that anyone
assuming any results from size_of(word) is simply asking for a rude
awakening.

--
Keith

Nov 16 '05 #31
George Macdonald wrote:
Well, actually the whole idea of DLL's is outdated in .NET isn't it? The
idea of .NET was to create a framework that is independent of
architecture (albeit mostly limited to Microsoft operating systems). So
a program once compiled doesn't care if its on a 32-bit processor or a
64-bit one, or even care if it's running on an x86-compatible processor
for that matter. There is no dependence on bittedness or instruction set.

Huh? They call that "compiled" nowadays?


Well, it's compiled into a byte-code of some sort, just not machine
code. It's just like Java, only Microsoft-oriented.

Yousuf Khan
Nov 16 '05 #32
George Macdonald wrote:
On Sat, 22 Jan 2005 12:32:06 -0500, keith <kr*@att.bizzzz> wrote:
Only in the x86 world. In the world of 'z's and PPCs a "word" is still
32bits.

How much is this "16-bit word" definition due to M$'s pollution of the
computer vocabulary?... not sure how things stand in the Unix world at
present... but yes we've had computers with 16, 24, 32, 36, 60, 64 bit
words over the years that I've worked with. I've always thought of the
word size as the integer register width.


Well, we've got the bits, the nibbles, the bytes, the words, etc. The first
three are completely standardized values (remember the nibble? It's
4 bits, in case you don't). Then everything beyond the word is
nebulous, but thank god they didn't decide to create a new bit-size term
based around human language, like the clause or the sentence! We already
have the paragraph, and the page, and that's more than enough.

BTW, in the Unix world, these days they always preface /word/ with an
actual bit-size description, such as "32-bit word" or "64-bit word".

Yousuf Khan

Nov 16 '05 #33
On Sun, 23 Jan 2005 03:22:47 -0500, Yousuf Khan <bb****@ezrs.com> wrote:
George Macdonald wrote:
Well, actually the whole idea of DLL's is outdated in .NET isn't it? The
idea of .NET was to create a framework that is independent of
architecture (albeit mostly limited to Microsoft operating systems). So
a program once compiled doesn't care if its on a 32-bit processor or a
64-bit one, or even care if it's running on an x86-compatible processor
for that matter. There is no dependence on bittedness or instruction set.

Huh? They call that "compiled" nowadays?


Well, it's compiled into a byte-code of some sort, just not machine
code. It's just like Java, only Microsoft-oriented.


It's just not real code and its source is not real software. :-) This
abuse of blurring the difference is going too far. What's the point of
faster and faster processors if they just get burdened with more and more
indirection? Neither Java, nor any other language, *has* to produce
interpretive object code.

Such languages have their place and reasons for use -- from security to
laziness, or just toy applications -- but to suggest that DLLs, which
already have the burden of symbolic runtime linkage, are now "outdated" is
scary.

--
Rgds, George Macdonald
Nov 16 '05 #34
On Sat, 22 Jan 2005 19:25:56 -0500, Robert Myers <rm********@comcast.net>
wrote:
On Sat, 22 Jan 2005 17:07:59 -0500, George Macdonald
<fa********************@tellurian.com> wrote:
On Fri, 21 Jan 2005 19:18:58 -0500, Yousuf Khan <bb****@ezrs.com> wrote:


Well, actually the whole idea of DLL's is outdated in .NET isn't it? The
idea of .NET was to create a framework that is independent of
architecture (albeit mostly limited to Microsoft operating systems). So
a program once compiled doesn't care if its on a 32-bit processor or a
64-bit one, or even care if it's running on an x86-compatible processor
for that matter. There is no dependence on bittedness or instruction set.


Huh? They call that "compiled" nowadays?


That language is at least as old as Pascal isn't it? One spoke of
compiling to p-code...no?


Pseudo code and interpretive execution go back much further than Pascal -
many proprietary languages existed as such. I've worked on a couple of
"compilers" which produced interpretive code myself, and even the end user
knew the importance of the difference - IOW if they wanted to do real work,
then a p-code Pascal was the wrong choice... same with Basic. I guess I'm
objecting more to the notion that it can replace real machine code... i.e.
that the "whole idea of DLLs is outdated".

--
Rgds, George Macdonald
Nov 16 '05 #35
On Sun, 23 Jan 2005 03:37:04 -0500, Yousuf Khan <bb****@ezrs.com> wrote:
George Macdonald wrote:
On Sat, 22 Jan 2005 12:32:06 -0500, keith <kr*@att.bizzzz> wrote:
Only in the x86 world. In the world of 'z's and PPCs a "word" is still
32bits.

How much is this "16-bit word" definition due to M$'s pollution of the
computer vocabulary?... not sure how things stand in the Unix world at
present... but yes we've had computers with 16, 24, 32, 36, 60, 64 bit
words over the years that I've worked with. I've always thought of the
word size as the integer register width.


Well, we got the bits, the nibbles, the bytes, the words, etc. The first
three are completely standardized values (remember the nibble? It's
4-bits in case you don't). Then you got everything after the word is
nebulous, but thank god the didn't decide to create a new bit-size term
based around human language, like the clause or the sentence! We already
have the paragraph, and the page, and that's more than enough.


There was also the dibit, which I've never been sure how to pronounce :-) and
the "movement" to use octet instead of byte seems to be gaining strength,
especially in Europe (French revisionism? :-))... remembering that the
first computers I used had 6-bit bytes. I don't recall what Univac called
their 9-bit field... "quarter-word"??
BTW, in the Unix world, these days they always preface /word/ with an
actual bit-size description, such as "32-bit word" or "64-bit word".


Which is how it should be... but I'd hope it doesn't use "word" for a 16-bit
field on, say, an Athlon64. ;-)

As I recall IBM introduced the concept of a variable sized word with the
System/360s but they have always been considered to have a 32-bit word size
- that's the size of the integer registers and the most efficient working
unit of integer data.

--
Rgds, George Macdonald
Nov 16 '05 #36
On Sun, 23 Jan 2005 03:37:04 -0500, Yousuf Khan wrote:
George Macdonald wrote:
On Sat, 22 Jan 2005 12:32:06 -0500, keith <kr*@att.bizzzz> wrote:
Only in the x86 world. In the world of 'z's and PPCs a "word" is still
32bits.

How much is this "16-bit word" definition due to M$'s pollution of the
computer vocabulary?... not sure how things stand in the Unix world at
present... but yes we've had computers with 16, 24, 32, 36, 60, 64 bit
words over the years that I've worked with. I've always thought of the
word size as the integer register width.


Well, we got the bits, the nibbles, the bytes, the words, etc. The first
three are completely standardized values (remember the nibble? It's
4-bits in case you don't).


Actually it's spelled "nybble". ;-) "Byte" does *not* mean 8-bits.
It's the size of a character. Just because character = 8bits for all
machines we care to remember doesn't change the meaning of "byte". The
correct term for an general eight-bit entity is "octet".

--
Keith
Nov 16 '05 #37
On Sun, 23 Jan 2005 08:24:20 -0500, George Macdonald
<fa********************@tellurian.com> wrote:
On Sat, 22 Jan 2005 19:25:56 -0500, Robert Myers <rm********@comcast.net>
wrote:
On Sat, 22 Jan 2005 17:07:59 -0500, George Macdonald
<fa********************@tellurian.com> wrote:
On Fri, 21 Jan 2005 19:18:58 -0500, Yousuf Khan <bb****@ezrs.com> wrote:


Well, actually the whole idea of DLL's is outdated in .NET isn't it? The
idea of .NET was to create a framework that is independent of
architecture (albeit mostly limited to Microsoft operating systems). So
a program once compiled doesn't care if its on a 32-bit processor or a
64-bit one, or even care if it's running on an x86-compatible processor
for that matter. There is no dependence on bittedness or instruction set.

Huh? They call that "compiled" nowadays?


That language is at least as old as Pascal isn't it? One spoke of
compiling to p-code...no?


Pseudo code and interpretive execution goes back much further than Pascal -
many proprietary languages existed as such. I've worked on a couple of
"compilers" which produced interpretive code myself and even the end user
knew the importance of the difference - IOW if they wanted to do real work,
then a p-code Pascal was the wrong choice... same with Basic. I guess I'm
objecting more to the notion that it can replace real machine code... i.e.
"whole idea of DLLs is outdated".


"The whole idea of DLLs is outdated" sounds really attractive. It's
also a train that's been coming down the track for a long time, if
it's the same idea as virtualized architecture.

I wouldn't include tokenized Basic source, but I guess there's a good
bit of old mainframe code running on a virtual machine. Anybody
venture a guess as to how much?

I've kind of lost track of the .NET thing. It's better than Java, I
gather, and there is an open source version, mono, which is attractive
enough for open source types to work under the proprietary gunsight of
Microsoft.

Big-endian, little-endian, 64-bit, 32-bit. Yuk. Bring on the virtual
machines.

Except for us number-crunching types, I guess, but more and more number
crunching takes place in an interpreted environment like Matlab,
anyway.

RM
Nov 16 '05 #38
On Sun, 23 Jan 2005 11:10:09 -0500, Robert Myers wrote:
On Sun, 23 Jan 2005 08:24:20 -0500, George Macdonald
<fa********************@tellurian.com> wrote:
On Sat, 22 Jan 2005 19:25:56 -0500, Robert Myers <rm********@comcast.net>
wrote:
On Sat, 22 Jan 2005 17:07:59 -0500, George Macdonald
<fa********************@tellurian.com> wrote:

On Fri, 21 Jan 2005 19:18:58 -0500, Yousuf Khan <bb****@ezrs.com> wrote:
>
>Well, actually the whole idea of DLL's is outdated in .NET isn't it? The
>idea of .NET was to create a framework that is independent of
>architecture (albeit mostly limited to Microsoft operating systems). So
>a program once compiled doesn't care if its on a 32-bit processor or a
>64-bit one, or even care if it's running on an x86-compatible processor
>for that matter. There is no dependence on bittedness or instruction set.

Huh? They call that "compiled" nowadays?

That language is at least as old as Pascal isn't it? One spoke of
compiling to p-code...no?
Pseudo code and interpretive execution goes back much further than Pascal -
many proprietary languages existed as such. I've worked on a couple of
"compilers" which produced interpretive code myself and even the end user
knew the importance of the difference - IOW if they wanted to do real work,
then a p-code Pascal was the wrong choice... same with Basic. I guess I'm
objecting more to the notion that it can replace real machine code... i.e.
"whole idea of DLLs is outdated".


"The whole idea of DLLs is outdated" sounds really attractive. It's
also a train that's been coming down the track for a long time, if
it's the same idea as virtualized architecture.


Well, that's one way of getting rid of DLL-Hell.
I wouldn't include tokenized Basic source, but I guess there's a good
bit of old mainframe code running on a virtual machine. Anybody
venture a guess as to how much?
All of it? ...and not only the "old" stuff. Mainframes have been
virtualized for decades. ...though perhaps in a slightly different
meaning of "virtualized".

Looking at it another way, I'd propose that most modern processors
are virtualized, including x86. The P4/Athlon (and many before) don't
execute the x86 ISA natively, but rather "interpret" it to a RISCish
processor.
I've kind of lost track of the .NET thing. It's better than Java, I
gather, and there is an open source version, mono, which is attractive
enough for open source types to work under the proprietary gunsight of
Microsoft.
I don't see it as "better" in any meaning of the word. Java's purpose in
life is to divorce the application from the processor and OS. I can't
see how .net is "better" at this. If platform independance isn't wanted,
why would anyone use Java?
Big-endian, little-endian, 64-bit, 32-bit. Yuk. Bring on the virtual
machines.
They are. You still have to decide on a data format.
Except for us number-cruching types, I guess, but more and more number
crunching takes place in an interpreted environment like matlab, anyway.


--
Keith

Nov 16 '05 #39
Bitstring <pa***************************@att.bizzzz>, from the wonderful
person keith <kr*@att.bizzzz> said
<snip>
.. but yes we've had computers with 16, 24, 32, 36, 60, 64 bit
words over the years that I've worked with. I've always thought of the
word size as the integer register width.
That's the classical definition (as I've noted earlier in this thread).
I'm sure you've missed a bunch too.


ISTR PDP-8s had 12-bit words (the PDP-7s and 15s were 18-bit). Atlas/Titan
mainframes were 48, again IIRC .. it's a heck of a long time ago. [No, please
don't kick the Mercury delay line memory tank .... Arrghhh.]
The fact is that anyone
assuming any results from size_of(word) is simply asking for a rude
awakening.


Indeed. Even sizeof(char) was not guaranteed on all machines. We
remember 5-track Flexowriters too. 8>.

--
GSV Three Minds in a Can
Outgoing Msgs are Turing Tested,and indistinguishable from human typing.
Nov 16 '05 #40
On Sun, 23 Jan 2005 11:59:54 -0500, keith <kr*@att.bizzzz> wrote:
On Sun, 23 Jan 2005 11:10:09 -0500, Robert Myers wrote:
I wouldn't include tokenized Basic source, but I guess there's a good
bit of old mainframe code running on a virtual machine. Anybody
venture a guess as to how much?


All of it? ...and not only the "old" stuff. Mainframes have been
virtualized for decades. ...though perhaps in a slightly different
meaning of "virtualized".

Looking at it another way, I'd propose that most modern processors
are virtualized, incuding x86. The P4/Athlon (and many before) don't
execute the x86 ISA natively, rather "interpret" it to a RISCish
processor.

I take your point, but including microcode stretches the notion of
virtualization too far on one end, the way that including tokenized
Basic stretches it too far on the other. I'm too lazy to try to come
up with a bullet-proof definition, but there is a class of virtual
machines that could naturally be implemented in hardware but are
normally implemented in software: p-code, Java byte-code, m-code, and
I would put executing 360 instructions on x86 in that class.
Interpreting x86 into microcode is done in hardware, of course.
MSIL, the intermediate code for .NET, actually does compile to machine
code, apparently, and is not implemented on a virtual machine.
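For anyone who wants to poke at that, here is a rough sketch of what I mean -
nothing official, and the class and method names are just ones I made up. The
C# compiler emits MSIL, which the SDK's ildasm tool will dump for you, and the
CLR's JIT turns each method into native code the first time it is called;
after that, the cached native code is what actually runs.

// Compile with: csc JitDemo.cs
// "ildasm JitDemo.exe" shows the MSIL the C# compiler produced.
using System;

class JitDemo
{
    // A deliberately dull method so there is something for the JIT to chew on.
    static long Sum(int n)
    {
        long total = 0;
        for (int i = 0; i < n; i++)
            total += i;
        return total;
    }

    static void Main()
    {
        // The JIT compiles Sum to native code on the first call;
        // the second call reuses the cached native code.
        Console.WriteLine(Sum(1000000));
        Console.WriteLine(Sum(1000000));
    }
}

Run it under a debugger and look at the disassembly for Sum and you see
ordinary x86 (or x64) instructions, not byte-code being stepped through by an
interpreter.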

The term "virtualize" is pretty broad. One kind of virtualization,
the kind that vmware does or that I think Power5 servers do virtualize
the processor to its own instruction set, and I expect _that_ kind of
virtualization to become essentially universal for purposes of
security. You get the security and compartmentalization benefits of
that kind of virtualization for free when you do instruction
translation by running on a virtual machine in software.
I've kind of lost track of the .NET thing. It's better than Java, I
gather, and there is an open source version, mono, which is attractive
enough for open source types to work under the proprietary gunsight of
Microsoft.


I don't see it as "better" in any meaning of the word. Java's purpose in
life is to divorce the application from the processor and OS. I can't
see how .net is "better" at this. If platform independance isn't wanted,
why would anyone use Java?


I barely know Java, and C# not at all. C# is reputed to be nicer for
programming.

RM
Nov 16 '05 #41
George Macdonald wrote:
It's just not real code and it's source is not real software.:-) This
abuse of blurring the difference is going too far. What's the point of
faster and faster processors if they just get burdened with more and more
indirection. Neither Java, nor any other language, *has* to produce
interpretive object code.

Such languages have their place and reasons for use -- from security to
laziness, or just toy application -- but to suggest that DLLs, which
already have the burden of symbolic runtime linkage, are now "outdated" is
scarey.


Not sure why you're so married to the concept of DLLs. They had their
purpose a few years ago: they were much better than the static-linked
libraries they replaced because they were brought into memory only
when they were needed, not all at once at the beginning. But now the
requirement is for code that isn't dependent on the underlying processor
architecture, and we have Java and .NET. These aren't exactly the same
as the old-fashioned interpreted code either; they are decoded
only once, on the fly, and then sit cached as machine code while
they run.

Yousuf Khan
Nov 16 '05 #42
On Sun, 23 Jan 2005 16:59:25 -0500, Yousuf Khan <bb****@ezrs.com> wrote:
George Macdonald wrote:
It's just not real code and it's source is not real software.:-) This
abuse of blurring the difference is going too far. What's the point of
faster and faster processors if they just get burdened with more and more
indirection. Neither Java, nor any other language, *has* to produce
interpretive object code.

Such languages have their place and reasons for use -- from security to
laziness, or just toy application -- but to suggest that DLLs, which
already have the burden of symbolic runtime linkage, are now "outdated" is
scarey.


Not sure why you're so married to the concept of DLLs, they had their
purpose a few years ago, they were much better than the static-linked
libraries they replaced because they only were brought into memory only
when they were needed, not all at once at the beginning. But now the
requirement is for code that isn't dependent on underlying processor
architecture, and we have JAVA and .NET. These aren't exactly the same
as the old fashioned interpretted code either, these ones are decoded
only once on the fly and then they exist cached as machine code while
they run.


DLLs are just the way it's done with Windows - nothing to do with being
married to anything; DLLs only got out of hand because of the fluff burden.
What irks me is machine cycles being pissed away on the indirection of
pseudo code. To me any suggestion that you can do serious computing with
this stuff, and do away with real machine code for system level library
functions, is madness.

--
Rgds, George Macdonald
Nov 16 '05 #43
On Sun, 23 Jan 2005 15:47:09 -0500, Robert Myers wrote:
On Sun, 23 Jan 2005 11:59:54 -0500, keith <kr*@att.bizzzz> wrote:
On Sun, 23 Jan 2005 11:10:09 -0500, Robert Myers wrote:
I wouldn't include tokenized Basic source, but I guess there's a good
bit of old mainframe code running on a virtual machine. Anybody
venture a guess as to how much?


All of it? ...and not only the "old" stuff. Mainframes have been
virtualized for decades. ...though perhaps in a slightly different
meaning of "virtualized".

Looking at it another way, I'd propose that most modern processors
are virtualized, incuding x86. The P4/Athlon (and many before) don't
execute the x86 ISA natively, rather "interpret" it to a RISCish
processor.

I take your point, but including microcode stretches the notion of
virtualization too far on one end the way that including tokenized
Basic stretches it too far on the other. I'm too lazy to try to come
up with a bullet-proof definition,


I understand. It's impossible to categorize such things because there is
such a continuum of architectures that have been tried. However, you are
pretty loosey-goosey with your term "virtual". Remember VM/360?

but there is a class of virtual
machines that could naturally be implemented in hardware but are
normally implemented in software: p-code, java byte-code, m-code, and I
would put executing 360 instructions on x86 in that class.
Ok, a better example of your class of "virtualization" would be the 68K on
PPC. I call that emulation, not virtualization. I call what VM/360,
and later, did "virtualization". The processor virtualized itself.

Ok, if you don't like microcode (what is your definition of "microcode",
BTW) as a virtualizer, now it's your turn to tell me why you think
"emulation" is "virtualization". ;-)
Interpreting
of x86 to microcode is done in hardware, of course. MSIL, the
intermediate code for .NET, actually does compile to machine code,
apparently, and is not implemented on a virtual machine.
Ok, what would you call a Java byte-code machine?
The term "virtualize" is pretty broad.
Indeed, but it helps if we all get our terms defined if we're going
to talk about various hardware and feechurs.
One kind of virtualization, the
kind that vmware does or that I think Power5 servers do virtualize the
processor to its own instruction set, and I expect _that_ kind of
virtualization to become essentially universal for purposes of security.
Too bad x86 is soo late to that table. M$ wanted no part of that, though.
This brand of virtualization would have put them out of business a decade
ago. BTW, I call the widget that allows this brand of "virtualization" a
"hypervisor" (funny, so does IBM ;-).
You get the security and compartmentalization benefits of that kind of
virtualization for free when you do instruction translation by running
on a virtual machine in software.


Free?
I've kind of lost track of the .NET thing. It's better than Java, I
gather, and there is an open source version, mono, which is attractive
enough for open source types to work under the proprietary gunsight of
Microsoft.


I don't see it as "better" in any meaning of the word. Java's purpose
in life is to divorce the application from the processor and OS. I
can't see how .net is "better" at this. If platform independance isn't
wanted, why would anyone use Java?

I barely know Java, and c# not at all. c# is reputed to be nicer for
programming.


Perhaps, if you want to be forever wedded to Billy.

--
Keith
Nov 16 '05 #44
George Macdonald wrote:
DLLs are just the way it's done with Windows - nothing to do with being married to anything; DLLs only got out of hand because of the fluff burden. What irks me is machine cycles being pissed away on the indirection of pseudo code. To me any suggestion that you can do serious computing with this stuff, and do away with real machine code for system level library functions, is madness.


Machine cycles aren't so precious anymore; the software side hasn't
kept up with the developments on the hardware side for quite some time
now. Now's as good a time as any to try out these indirection
techniques. It will more than likely help out in the future, as it will
probably mean we're less tied down to any one processor architecture.
Piss away a couple of machine cycles for machine independence?
Sure, sounds good to me.

Yousuf Khan

Nov 16 '05 #45
On 23 Jan 2005 22:10:24 -0800, "YKhan" <yj****@gmail.com> wrote:
George Macdonald wrote:
DLLs are just the way it's done with Windows - nothing to do with being
married to anything; DLLs only got out of hand because of the fluff burden.
What irks me is machine cycles being pissed away on the indirection of
pseudo code. To me any suggestion that you can do serious computing with
this stuff, and do away with real machine code for system level library
functions, is madness.


Machine cycles aren't so precious anymore, the software side hasn't
kept up with the developments in the hardware side for quite some time
now. Now's as good a time as any to try out these indirection
techniques. It will more than likely help out in the future as it will
probably mean we're less tied down to one processor achitecture
anymore. Piss a couple of machine cycles for for machine independence?
Sure, sounds good to me.


But it's not a couple of machine cycles - it bogs the whole thing down. If you
restrict it to a user interface, where the end user is allowed to talk to
the system through this clunker, you *might* be able to get away with it. I
stress *serious* work here - the core load of the "system" (OS + services +
app). This stuff has already been tried at various levels, from Alpha to
Transmeta... it doesn't work to, err, satisfaction.

Given that we are at the wrong end of the exponential slope of hardware
scaling, machine cycles are likely to become more precious.:-)

--
Rgds, George Macdonald
Nov 16 '05 #46
On Sun, 23 Jan 2005 22:10:02 -0500, keith <kr*@att.bizzzz> wrote:
On Sun, 23 Jan 2005 15:47:09 -0500, Robert Myers wrote:

<snip>

Ok, a better example of your class of "virtualization" would be the 68K on
PPC. I call that emulation, not virtualization. I call what VM/360,
and later, did "virtualization". The processor virtualized itself.

Ok, if you don't like microcode (what is your definitionof "microcode",
BTW) as a virtualizer, now it's your turn to tell me why you think
"emulation" is "virtualization". ;-)


The definition game just isn't very much fun. Emulation is one
processor pretending to be another. Virtualization is when you pull
the "machine" interface loose from the hardware so that the machine
you are interacting with has state that is independent of the physical
hardware. That's why I don't want to call microcode virtualization.
Interpreting
of x86 to microcode is done in hardware, of course. MSIL, the
intermediate code for .NET, actually does compile to machine code,
apparently, and is not implemented on a virtual machine.


Ok, what would you call a Java byte-code machine?
The term "virtualize" is pretty broad.


Indeed, but it helps if we all get our terms defined if we're going
to talk about various hardware and feechurs.
One kind of virtualization, the
kind that vmware does or that I think Power5 servers do virtualize the
processor to its own instruction set, and I expect _that_ kind of
virtualization to become essentially universal for purposes of security.


Too bad x86 is soo late to that table. M$ wanted no part of that though.
This brand of virtualizatin would have put them out of business a decade
ago. BTW, I call the widget that allows this brand of "virtualization" a
"hypervisor" (funny, so does IBM ;-).

You may think that kind of virtualization should belong to IBM, and
you may be right, but I don't expect to see hypervisor used as
anything but a proprietary IBM marketing term.
You get the security and compartmentalization benefits of that kind of
virtualization for free when you do instruction translation by running
on a virtual machine in software.


Free?

The hard part is pulling the virtual processor loose from the
underlying hardware. Once the state of your "machine" is separate
from hardware, you can examine it, manipulate it, duplicate it, keep
it from being hijacked,...all without fear of unintentionally
interfering with the operation of the machine. If you're trying to
emulate one processor on another, the virtual processor is
automatically separated from the hardware.
I've kind of lost track of the .NET thing. It's better than Java, I
gather, and there is an open source version, mono, which is attractive
enough for open source types to work under the proprietary gunsight of
Microsoft.

I don't see it as "better" in any meaning of the word. Java's purpose
in life is to divorce the application from the processor and OS. I
can't see how .net is "better" at this. If platform independance isn't
wanted, why would anyone use Java?

I barely know Java, and c# not at all. c# is reputed to be nicer for
programming.


Perhaps, if you want to be forever wedded to Billy.


The long-term fate of Mega$loth will be interesting to watch. Will they
accomplish the customer-in-leg-irons routine that IBM tried but
ultimately failed at? I'm doubting it, just like I'm doubting that
x86 is forever.

RM

Nov 16 '05 #47
"Yousuf Khan" <bb****@ezrs.com> wrote in
news:-M********************@rogers.com...
...
Not sure why you're so married to the concept of DLLs, they had their
purpose a few years ago, they were much better than the static-linked
libraries they replaced because they only were brought into memory only
when they were needed, not all at once at the beginning.
???
Windows always loads code when it's needed; it doesn't make a difference
whether it's in a DLL or not. Executable files (that's EXEs and DLLs) are
memory-mapped, and paged into main memory on first access. Also, DLLs didn't
replace static libraries; both concepts are commonly used in unmanaged
programs.
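You can even watch the on-demand part from managed code if you care to - the
little sketch below is mine, not from any documentation, and MessageBox is
just an arbitrary pick. With P/Invoke the CLR doesn't map the target DLL when
the program starts; it loads it the first time the imported function is
actually called.

using System;
using System.Runtime.InteropServices;

class DllDemo
{
    // Declaring the import loads nothing; the CLR does the LoadLibrary
    // for user32.dll the first time MessageBox is invoked.
    [DllImport("user32.dll", CharSet = CharSet.Auto)]
    static extern int MessageBox(IntPtr hWnd, string text, string caption, uint type);

    static void Main()
    {
        Console.WriteLine("Import declared, DLL not touched yet.");

        // First call: the DLL is mapped in, then the function runs.
        MessageBox(IntPtr.Zero, "Loaded on first use", "DLL demo", 0);
    }
}

Watching the process's module list before and after the call (Process Explorer
will do) makes the late load visible - though user32 is usually resident for
other reasons, so a more obscure DLL shows the effect more clearly.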
But now the requirement is for code that isn't dependent on underlying
processor architecture,
That requirement has been there for ages. In fact, it's one of the reasons
why high-level programming languages (like C) were created.
and we have JAVA and .NET. These aren't exactly the same as the old
fashioned interpretted code either, these ones are decoded only once on
the fly and then they exist cached as machine code while they run.


Note that this is not generally true for Java VMs. The Sun VM, for example,
interprets code in the beginning and later compiles the code that's used
frequently, to reduce loading times (JITing something like Swing would be
overkill).

Niki
Nov 16 '05 #48
On Mon, 24 Jan 2005 17:28:00 +0100, "Niki Estner"
<ni*********@cube.net> wrote:
"Yousuf Khan" <bb****@ezrs.com> wrote in
news:-M********************@rogers.com...
...
But now the requirement is for code that isn't dependent on underlying
processor architecture,


That requirement has been there for ages. In fact, it's one of the reasons
why high-level programming languages (like C) were created.


Oh, we old Fortran programmers only wish. C, as it is commonly used,
is really a portable assembler. The hardware dependence is wedged in
with all kinds of incomprehensible header files and conditional
compilation. What universe do you live in that you never run into
header-file weirdness that corresponds to a hardware dependency?
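And to tie it back to the 64-bit question that started this thread: even the
managed stuff doesn't hide the hardware completely. A rough sketch - the WIN64
symbol below is just a name I picked for illustration, not something the
compiler defines for you:

using System;

class PtrDemo
{
    static void Main()
    {
        // 4 in a 32-bit process, 8 in a 64-bit process - decided at run time
        // by whichever CLR loads the program.
        Console.WriteLine("Pointer size: {0} bytes", IntPtr.Size);

#if WIN64
        // Taken only when built with: csc /define:WIN64 PtrDemo.cs
        Console.WriteLine("Built with the 64-bit code path.");
#else
        Console.WriteLine("Built with the default code path.");
#endif
    }
}

The same trick C uses - conditional compilation - is sitting right there in
C# too, the moment you really need different code per platform.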

RM
Nov 16 '05 #49
On Tue, 25 Jan 2005 09:30:51 +0100, "Niki Estner"
<ni*********@cube.net> wrote:
"Robert Myers" <rm********@comcast.net> wrote in
news:73********************************@4ax.com.. .
On Mon, 24 Jan 2005 17:28:00 +0100, "Niki Estner"
<ni*********@cube.net> wrote:
"Yousuf Khan" <bb****@ezrs.com> wrote in
news:-M********************@rogers.com...
...


But now the requirement is for code that isn't dependent on underlying
processor architecture,

That requirement has been there for ages. In fact, it's one of the reasons
why high-level programming languages (like C) were created.


Oh, us old Fortran programmers only wish. c, as it is commonly used,
is really a portable assembler. The hardware dependence is wedged in
with all kinds of incomprehensible header files and conditional
compilation. What universe do you live in that you never run into
header file weirdness that corresponds to a hardware dependency?


I said the requirement was there, I didn't say it was fulfilled... The post
before sounded like this was a brand-new wish, and Java/.NET were the first
ones trying to solve it. They weren't. And they didn't. Ever tried to make
an AWT applet run on multiple Java VMs?


I don't do enough with Java to know if it is any improvement at all in
terms of portability and reusability. My take is that it isn't.

In theory, though, a virtual machine solves one class of portability
problems by presenting a consistent "hardware" interface, no matter
what the actual hardware. In practice, if Sun keeps mucking around
with the runtime environment, you hardly notice that advantage.

RM
Nov 16 '05 #50
