Bytes | Developer Community

What does '64 bit' mean? Lame question, but hear me out :)

Ok, first of all, let's get the obvious stuff out of the way. I'm an idiot. So please indulge me for a moment. Consider it an act of "community service"....

What does "64bit" mean to your friendly neighborhood C# programmer? The standard answer I get from computer sales people is: "It means that the CPU can process 64 bits of data at a time instead of 32." Ok... I guess I *kind* of understand what that means at an intuitive level, but what does it mean in practice? Consider the following code:

long l = 1;
for (int i = 0; i < 5; i++) {
    Console.WriteLine("Emo says " + l);
    l += 1;
}

How would this code run differently on a 64 bit processor as opposed to a 32 bit processor? Will it run twice as fast since the instructions are processed "64 bits at a time"? Will the 64 bit (long) variable 'l' be incremented more efficiently since now it can be done in a single processor instruction?

Now I want to ask about memory. I think this is the one benefit of 64bit computing that I DO understand. In a 32bit system, a memory pointer can only address 2^32 worth of process memory versus 2^64 worth of memory (wow!) in a 64bit system. I can see how this would be a major advantage for databases like SQL Server which could easily allocate over 4gigs of memory -- but is this a real advantage for a typical C# application?

Finally, I want to ask about interoperability. If I compile a 32bit C# app, will the ADO.NET code that it contains be able to communicate with the 64bit version of SQL Server?
Thanks for helping a newbie,

Larry


Nov 16 '05
On Mon, 24 Jan 2005 21:28:43 -0500, keith <kr*@att.bizzzz> wrote:
On Mon, 24 Jan 2005 08:17:27 -0500, Robert Myers wrote:

<snip>
Virtualization is when you pull
the "machine" interface loose from the hardware so that the machine
you are interacting with has state that is independent of the physical
hardware. That's why I don't want to call microcode virtualization.
I don't see how this definition is different from "emulation". ...or
microcode, for that matter. I see it all as a different level of
indirection.


Microcode makes the machine state independent of the physical
hardware? Nah.

Emulation always virtualizes. Whether you want to say that a processor
that is virtualized by one means or another to multiple instances of
itself is emulating itself is a choice of language. I'd rather keep
emulation for circumstances where one processor is pretending to be
another. What about an x86 emulator running on x86? This is really
boring stuff to be spending time on.
What I call self-virtualization is interesting though (i.e. VM/370, and
such).


<snip>

The hard part is pulling the virtual processor loose from the underlying
hardware. Once the state of your "machine" is separate from hardware,
you can examine it, manipulate it, duplicate it, keep it from being
hijacked,...all without fear of unintentionally interfering with the
operation of the machine. If you're trying to emulate one processor on
another, the virtual processor is automatically separated from the
hardware.


I don't have any issue with you here. Though I can't alter the state of
a microcoded state machine from user space either, other than through the
architected interface. Again, I don't see the big difference.

Would you consider Transmeta's processors "virtualized"? If not, why not?


The Transmeta (they are getting out of the business I hear) processor
is no more virtualized than x86 with microcode, at least from a user's
point of view. As far as I know, you can't get at any of the internal
hardware hooks. IBM's strategy (which I assume is going to be Intel's
strategy as well, maybe somebody can educate me) is to create hardware
hooks that a user with sufficient privilege can get at to facilitate
the illusion of separate processors.

<snip>

The long-term fate of Mega$loth will be interesting to watch. They will
accomplish the customer-in-legirons routine that IBM tried but
ultimately failed at? I'm doubting it, just like I'm doubting that x86
is forever.


I don't think anything is forever, but I do know that even S/360 is still
around and making much money. Wintel may be slain at some point, but I
don't pretend to know what will drive the nail. I don't think it's
anything we've yet seen, though I'd *love* to be proven wrong. OTOH, the
whole market may implode and I'd rather not see that, though M$ is trying
hard to piss off as many as possible.


I expect the entire programming model to change. Stream processors,
GPU's, network processors, packet processors in the place of
conventional microprocessors.

x86, s/360 forever? Of course. That huge pile of software would cost
a lot of money to recreate. It may not even be possible without
causing the world economy to collapse.

RM

Nov 16 '05 #51
"Robert Myers" <rm********@comcast.net> wrote in
news:p8********************************@4ax.com...
On Tue, 25 Jan 2005 09:30:51 +0100, "Niki Estner"
<ni*********@cube.net> wrote:
"Robert Myers" <rm********@comcast.net> wrote in
news:73********************************@4ax.com. ..
On Mon, 24 Jan 2005 17:28:00 +0100, "Niki Estner"
<ni*********@cube.net> wrote:

"Yousuf Khan" <bb****@ezrs.com> wrote in
news:-M********************@rogers.com...
> ...
> But now the requirement is for code that isn't dependent on underlying
> processor architecture,

That requirement has been there for ages. In fact, it's one of the
reasons
why high-level programming languages (like C) were created.
Oh, us old Fortran programmers only wish. C, as it is commonly used,
is really a portable assembler. The hardware dependence is wedged in
with all kinds of incomprehensible header files and conditional
compilation. What universe do you live in that you never run into
header file weirdness that corresponds to a hardware dependency?


I said the requirement was there, I didn't say it was fulfilled... The
post
before sounded like this was a brand new wish, and Java/.Net were the
first
ones trying to solve it. They weren't. And, they didn't. Ever tried to
make
an AWT-Applet run on multiple Java VM's?


I don't do enough with Java to know if it is any improvement at all in
terms of portability and reusability. My take is that it isn't.

In theory, though, a virtual machine solves one class of portability
problems by presenting a consistent "hardware" interface, no matter
what the actual hardware. In practice, if Sun keeps mucking around
with the runtime environment, you hardly notice that advantage.


That's exactly what high level languages with standard libraries have tried
to do for years, too. Unfortunately, compiler/library builders don't
implement everything in the standards, which leads to headers containing
more #ifdef's than code, resp. to applet classes containing 3 different
layouts, one for each VM...

Niki
Nov 16 '05 #52
On Tue, 25 Jan 2005 08:20:28 -0500, Robert Myers wrote:
On Mon, 24 Jan 2005 21:28:43 -0500, keith <kr*@att.bizzzz> wrote:
On Mon, 24 Jan 2005 08:17:27 -0500, Robert Myers wrote:

<snip>
Virtualization is when you pull
the "machine" interface loose from the hardware so that the machine
you are interacting with has state that is independent of the physical
hardware. That's why I don't want to call microcode virtualization.


I don't see how this definition is different from "emulation". ...or
microcode, for that matter. I see it all as a different level of
indirection.


Microcode makes the machine state independent of the physical
hardware? Nah.


That depends on your view of the "soul of the machine". To me, a hardware
type, this is indirection at least, and thus virtualization of the
hardware. To a soft-weenie, the ISA (indeed the language) is king, so
perhaps you have a different opinion. ;-)
Emulation always virtualizes. Whether you want to say that a processor
that is virtualized by one means or another to multiple instances of
itself is emulating itself is a choice of language.
Well, that's where we are. The semantic lines blur quickly as technology
progresses.
I'd rather keep
emulation for circumstances where one processor is pretending to be
another. What about an x86 emulator running on x86? This is really
boring stuff to be spending time on.
Ok, forget emulation (though again there isn't a thick black line, IMO).
What about virtualization, which was the point, IIRC.

<much snippage>
Would you consider Transmeta's processors "virtualized"? If not, why
not?

The Transmeta (they are getting out of the business I hear)


Irrelevant. We wuz talking technology, not business (I wouldn't have given
you a nickel for their business when they were an enigma).
processor is
no more virtualized than x86 with microcode, at least from a user's
point of view.
I'm not a user. I'm a hardware weenie. ;-)
As far as I know, you can't get at any of the internal hardware hooks.
The user cannot, no. That doesn't change the microarchitecture.
IBM's strategy (which I assume is going to be Intel's
strategy as well, maybe somebody can educate me) is to create hardware
hooks that a user with sufficient privilege can get at to facilitate the
illusion of separate processors.
Sure, to different degrees. VM/360 (et al.) virtualized the entire
system such that you could re-virtualize it again under VM/360 (at least
it worked on the 370s). The PowerPCs add a layer of indirection
to "protected mode" by having a hypervisor state that lives above.
This is obviously a lesser "virtualization".
<snip>

The long-term fate of Mega$loth will be interesting to watch. They
will accomplish the customer-in-legirons routine that IBM tried but
ultimately failed at? I'm doubting it, just like I'm doubting that
x86 is forever.
I don't think anything is forever, but I do know that even S/360 is
still around and making much money. Wintel may be slain at some point,
but I don't pretend to know what will drive the nail. I don't think
it's anything we've yet seen, though I'd *love* to be proven wrong.
OTOH, the whole market may implode and I'd rather not see that, though M$
is trying hard to piss off as many as possible.


I expect the entire programming model to change. Stream processors,
GPU's, network processors, packet processors in the place of
conventional microprocessors.


You love them "stream processors"! You really ought to get into
programming DSPs, though that would take you into the world of real
problems. ;-)
x86, s/360 forever? Of course. That huge pile of software would cost
alot of money to recreate. It may not even be possible without causing
the world economy to collapse.


Exactly the point I've been making here for *years*. I learned this
lesson 30 years ago with FS. I wonder if Intel has learned this
lesson yet! Good ideas are quite often found to be not so wonderful.

--
Keith

Nov 16 '05 #53
BTW) as a virtualizer, now it's your turn to tell me why you think
"emulation" is "virtualization". ;-)


A VM is an emulator for something that never actually existed.

Personally I think that VMs are a great idea because
(a) you can manufacture them on demand
(b) process isolation
(c) after a decade of settling, the mature VM can be realised with hardware.
(The antonym for virtualise would be realise, I should think.)

It occurs to me that it shouldn't be too much of a challenge to build
hardware supporting VMs the way CPUs currently support processes. I'd be
surprised if such solutions aren't already IBM patents.
Nov 16 '05 #54
On Wed, 02 Feb 2005 12:37:29 +1000, Peter Wone wrote:
BTW) as a virtualizer, now it's your turn to tell me why you think
"emulation" is "virtualization". ;-)
A VM is an emulator for something that never actually existed.


Define "already existed". VM/360 virtualized the S/360 ISA, using the
S/360 ISA. The S/360 certainly existed before VM/360.

Power4/5 has a hypervisor layer that "virtualizes" the PPC protected mode
environment, which certainly existed before the hypervisor was added to
the architecture.

Personally I think that VMs are a great idea because
It would help to know what you mean here.
(a) you can manufacture them on demand

You can manufacture "threads" on demand too. Some virtualization is
limited in the "on demand" arena too. PR/SM only allowed seven instances
on the 390 (15 later) images (which could be VM, with any number of OS
images under that).

(b) process isolation

No question, done right. Not that getting a security rating is a small
feat here. Interprocess signaling is the big bugaboo.

(c) after a decade of settling, the mature VM can be realised with hardware. (The antonym for virtualise would be realise, I should think.)
The fact is that the process is the opposite. Mature hardware is
virtualized to add functionality.
It occurs to me that it shouldn't be too much of a challenge to build
hardware supporting VMs the way CPUs currently support processes. I'd be
surprised if such solutions aren't already IBM patents.


Given that VM/360 (CP-67) is at *least* 35 years old, I'd guess you're on
the right track. There has been a *ton* of work in this area over a long
time.

--
Keith

Nov 16 '05 #55
In comp.sys.ibm.pc.hardware.chips Peter Wone <pe****@wamoz.com> wrote:
BTW) as a virtualizer, now it's your turn to tell me why you think
"emulation" is "virtualization". ;-)


A VM is an emulator for something that never actually existed.

Personally I think that VMs are a great idea because
(a) you can manufacture them on demand
(b) process isolation
(c) after a decade of settling, the mature VM can be realised with hardware.
(The antonym for virtualise would be realise, I should think.)

It occurs to me that it shouldn't be too much of a challenge to build
hardware supporting VMs the way CPUs currently support processes. I'd be
surprised if such solutions aren't already IBM patents.

AFAIK all of the modern IBM mainframes support LPAR (logical
partition) mode. This allows you to run multiple machine images
on one physical machine. LPARs have been available on IBM's
mainframes since sometime in the 80's, IIRC.

Jerry
Nov 16 '05 #57
On Fri, 28 Jan 2005 21:26:18 -0500, keith <kr*@att.bizzzz> wrote:
On Tue, 25 Jan 2005 08:20:28 -0500, Robert Myers wrote:

<snip>

Microcode makes the machine state independent of the physical
hardware? Nah.


That depends on your view of the "soul of the machine". To me, a hardware
type, this is indirection at least, and thus virtualization of the
hardware. To a soft-weenie, the ISA (indeed the language) is king, so
perhaps you have a different opinion. ;-)


The difference in viewpoint is important, and I have a hard time
adopting your point of view. Hardware decisions are hard-coded. If
no user with any level of privilege can see the virtualization or
manipulate it, it might as well not exist. I'd be just as concerned
about the details of circuitry. It might be interesting to talk
about, but, like the weather, there wouldn't be much that I, or any
user, with or without screwdriver, could do about it. In the end,
though, an argument about language is just that--an argument about
language.

<snip>

I expect the entire programming model to change. Stream processors,
GPU's, network processors, packet processors in the place of
conventional microprocessors.


You love them "stream processors"!


Well, yes I do. I like paradigm shifts.

The history of computing is thickly sown with competing realities.

One reality is that every idea that could be thought of was thought of
in the first two decades if not the first decade of automatic
computation.

The other, apparently contradictory, reality is that nothing is
forever, including something so basic as the register and execution
unit model of computation.

The idea of having some central unit fetching instructions and data
and operating upon them is so etched into the minds of the community
that it is hard to visualize a future in which that is not the
dominant reality, but it is coming. The new reality will be that
packet processors operate upon packets and send them along. Already
there, of course, in terms of some network processing.

A packet processor can be stateless. Then you just need to be able to
intercept packets to introduce any level of indirection you care to.
Firewalls like iptables, which can also be the soul of a router,
already do that.
You really ought to get into
programming DSPs, though that would take you into the world of real
problems. ;-)


I should probably take a harder look at what _is_ going on with DSP
right now. Thanks for the suggestion. ;-).
x86, s/360 forever? Of course. That huge pile of software would cost
a lot of money to recreate. It may not even be possible without causing
the world economy to collapse.


Exactly the point I've been making here for *years*. I learned this
lesson 30 years ago with FS. I wonder if Intel has learned this
lesson yet! Good ideas are quite often found to be not so wonderful.


Oh, believe me Intel has taken the idea to heart. Their goal is to
get software written to a proprietary ISA. Then _they_ will own a
piece of the world economy forever.

RM

Nov 16 '05 #58
> Oh, believe me Intel has taken the idea to heart. Their goal is to
get software written to a proprietary ISA. Then _they_ will own a
piece of the world economy forever.


Too bad for them the x86 instructions are by and large generated by a code
translation process called compiling. The reason for sticking to the x86
based architecture isn't how the software is written but that the switch,
assuming everyone suddenly agrees that they want it, wouldn't be possible
overnight, so only a very small number of, say, developers are even giving
it any serious thought.

For a software developer it isn't that relevant what architecture they are
developing for, as long as it is at least big- or little-endian, the word
size is a multiple of 8 bits, and numeric values are encoded in two's
complement. When that is true (and even when it isn't, a lot of the time),
for the source code they write in Java, (ANSI) C, (ISO/IEC) C++, C#, VB,
Delphi, OCaml, Ada and other higher level languages, be they functional,
procedural or what not, it is not a very large portion of the source code
that couldn't be written to be platform agnostic.

Mostly, when writing code for a specific platform using specific APIs,
locking-on to specific hardware platforms occurs due to circumstances.. say,
Java VM and .NET MSIL are steps in the direction of supporting more hardware
with a "unified" (if I can take the plunge and misuse terminology slightly
to make a point) compiler frontend (and to a degree, backend).

When software is deployed using, say, C, the code has been statically
compiled for a specific platform, right? The x86 is a well-spread such
platform, right? Even this platform has a wide diversity.. an instance of
this platform (assuming at least a 32-bit implementation) has a wide range
of fragmentation:

- x86 32 bit (assumed always supported)
- x86 32 bit pro (Pentium Pro specific extensions)
- x87 floating-point co-processor (optional)
- mmx
- sse (includes mmx)
- sse2 (includes mmx, sse)
- sse3 (includes sse2, sse, mmx)

And that is just from Intel Corp. AMD has their own extensions like 3DNow!
and the plus versions of 3DNow! and MMX, and lately the x86-64, aka AMD64.

Phew! That's a mouthful, and now, if the software developer wants to support
these instruction sets' strengths specifically, it means a lot of work..
either different code is implemented in different dynamic libraries, or a
single dynamic library or executable dynamically chooses which "codepath"
can be taken and picks the optimal one using some heuristic built into the
generated code, either automatically by the compiler (think Intel C++ 8.1 or
similar) or by choices made by the developer. This is no longer a trivial
amount of work.

If the compiler's frontend could store and transmit the intermediate-format
results, which are then used in the later phases of compilation (done on the
client/host computer) in the compiler backend, it would be a step forward.
Java VM and .NET MSIL are attempts to get this right, aren't they? The
implementations aren't yet optimal, but they are, in cases, efficient
enough. Say, the Java VM nowadays uses linear-scan register allocation.. it
is a tradeoff between processing speed and efficiency of the generated code.
Say, using graph coloring would produce at least 30% more efficient code but
it would be a lot more expensive, therefore these on-client code generators
are compromises and employ a bag of tricks and tradeoffs, including HotSpot,
erm, technology in the JVM.

There is, however, no question about it: storing and transmitting the
intermediate format to the client and having the compiler backend as an
installable, upgradeable component in the client environment is a step
forward from static compilation and transmitting platform-specific
binaries - if nothing else, that is a logistically expensive practice as it
is. And let's throw a wild prediction in the air: the Power architecture
will gain popularity on the desktop, and AMD64 and the EM64T version of it
will also gain ground; suddenly there will be great pressure to support
all of these. What do you predict software developers will do at that point?

Well, of course, they will seek to go where the fence is lowest. ;-o

At any rate, it doesn't take a genius to see that the x86 is already a
pretty bloated platform as far as diversity goes (just look at the number of
extensions Intel themselves have introduced above!). After the task of
supporting the latest-and-greatest while also working on older systems is
done the pragmatic way, it isn't a big stretch to make things completely
virtualized in this regard. It's been happening for years, folks, and you
would be kidding yourselves by thinking that the trend will suddenly reverse
itself. x86 won't go and die all of a sudden, or even slowly fade out;
it'll just mean there will be more competition than just AMD vs. Intel vs.
"the rest of the x86 vendors" -- it just means that these folks, if they
want to bring out x86 based systems, have to bring out more power more
cheaply than anyone else, and that's what they have been good at all along.

There will still be a choice to make of which OS you want to use, Windows or
Something Else. If, however, applications use something like the .NET
Platform or Java, it means the choice of OS isn't as critical as it is now
(example: if you want to play some game you must look at which platforms are
supported ;-) But, say, if some application is written for .NET, you will be
able to run it on, say, Linux. Of course this assumes there is no non-managed
code used, which isn't that sure. Does Mono support non-managed code anyway,
because quite frankly I don't have a clue. (<- attention! IMPORTANT! This is
the chance to sum up this post! Use it! ;-)

Actually, when I think of it, I don't care.
Nov 16 '05 #59

This discussion thread is closed

Replies have been disabled for this discussion.
