
Storage of object reference

Is an object reference stored as a 32 bit value?
In other words, is a simple read or write of an object reference an
atomic statement?
Nov 16 '07 #1
13 Replies


On 2007-11-16 15:28:34 -0800, GeezerButler <ku******@gmail.com> said:
Is an object reference stored as a 32 bit value?
In other words, is a simple read or write of an object reference an
atomic statement?
Yes.

Nov 16 '07 #2

Unless you are on a 64 bit machine, right?

"Peter Duniho" wrote:
On 2007-11-16 15:28:34 -0800, GeezerButler <ku******@gmail.com> said:
Is an object reference stored as a 32 bit value?
In other words, is a simple read or write of an object reference an
atomic statement?

Yes.

Nov 17 '07 #3

On 2007-11-16 17:36:01 -0800, Family Tree Mike
<Fa************@discussions.microsoft.com> said:
Unless you are on a 64 bit machine, right?
Well, a) even on a 64-bit platform, assigning a reference should be an
atomic operation, and b) AFAIK there's no 64-bit version of .NET yet,
so even on a 64-bit platform it's my understanding that .NET references
are still only 32 bits wide.

Pete

Nov 17 '07 #4

Peter Duniho wrote:
Well, a) even on a 64-bit platform, assigning a reference should be an
atomic operation, and b) AFAIK there's no 64-bit version of .NET yet, so
even on a 64-bit platform it's my understanding that .NET references are
still only 32-bits wide.
I thought .NET 2.0 and newer did support 64 bit ??

Arne
Nov 17 '07 #5

Family Tree Mike wrote:
"Peter Duniho" wrote:
>On 2007-11-16 15:28:34 -0800, GeezerButler <ku******@gmail.com> said:
>>Is an object reference stored as a 32 bit value?
In other words, is a simple read or write of an object reference an
atomic statement?
Yes.
Unless you are on a 64 bit machine, right?
ECMA-335 says:

12.6.6 Atomic reads and writes
A conforming CLI shall guarantee that read and write access to properly
aligned memory locations no larger than the native word size (the size of
type native int) is atomic (see §12.6.2) when all the write accesses to a
location are the same size.

A native int in the .NET sense is 64 bit on 64 bit Windows.

So I would conclude that a reference is atomic on 64-bit Windows
as well.
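To make the ECMA-335 §12.6.6 guarantee concrete, here is a minimal sketch (all variable names are illustrative, not from any real API) of which plain reads and writes are atomic, and of the portable alternative for values wider than the native word:

```csharp
using System;
using System.Threading;

// Accesses to properly aligned locations no wider than the native word
// size are guaranteed atomic by the CLI spec. These names are illustrative.
object sharedRef = null;
int sharedInt = 0;
long sharedLong = 0;

// Atomic on both 32-bit and 64-bit CLRs: a reference is never wider than
// the native pointer size.
sharedRef = new object();

// Atomic everywhere: int (32 bits) fits in the native word on either platform.
sharedInt = 42;

// NOT guaranteed atomic on a 32-bit CLR: long is 64 bits wide, so another
// thread could observe a "torn" half-written value.
sharedLong = 0x1122334455667788L;

// Portable way to write and read a 64-bit value atomically on any platform:
Interlocked.Exchange(ref sharedLong, 1L);
long safeCopy = Interlocked.Read(ref sharedLong);
Console.WriteLine($"{sharedRef != null} {sharedInt} {safeCopy}");
```

Note that atomicity only rules out torn values; it says nothing about visibility or ordering between threads, for which `volatile`, `Interlocked`, or locks are still needed.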

Arne
Nov 17 '07 #6

Peter Duniho <Np*********@NnOwSlPiAnMk.com> wrote:
On 2007-11-16 17:36:01 -0800, Family Tree Mike
<Fa************@discussions.microsoft.com> said:
Unless you are on a 64 bit machine, right?

Well, a) even on a 64-bit platform, assigning a reference should be an
atomic operation, and b) AFAIK there's no 64-bit version of .NET yet,
so even on a 64-bit platform it's my understanding that .NET references
are still only 32-bits wide.
There's been a 64-bit CLR around for a while. On such a CLR, references
are 64 bits wide, but stores of references are still atomic.

--
Jon Skeet - <sk***@pobox.com>
http://www.pobox.com/~skeet Blog: http://www.msmvps.com/jon.skeet
If replying to the group, please do not mail me too
Nov 17 '07 #7

On 2007-11-16 19:23:57 -0800, Jon Skeet [C# MVP] <sk***@pobox.com> said:
There's been a 64-bit CLR around for a while. On such a CLR, references
are 64 bits wide, but stores of references are still atomic.
Ah, okay. I guess it's just that the .NET API itself isn't 64-bit
(objects no larger than 2GB, for example).

Nov 17 '07 #8

"Peter Duniho" <Np*********@NnOwSlPiAnMk.com> wrote in message
news:2007111623075637709-NpOeStPeAdM@NnOwSlPiAnMkcom...
On 2007-11-16 19:23:57 -0800, Jon Skeet [C# MVP] <sk***@pobox.com> said:
>There's been a 64-bit CLR around for a while. On such a CLR, references
are 64 bits wide, but stores of references are still atomic.

Ah, okay. I guess it's just that the .NET API itself isn't 64-bit
(objects no larger than 2GB, for example).
The .NET API is (32/64)-bit agnostic; it's the CLR (v2 and up) and the JIT
that know about the bitness of the underlying platform. That means that:
- MSIL (as generated by default for C#) is JIT-compiled to x64 or IA64 code
at run time on 64-bit Windows, while it's JIT-compiled to x86 on 32-bit
Windows.
- objects are laid out by the CLR respecting the specific 32/64-bit
addressing/alignment requirements.
- object references (pointers in machine code) are 64 bit or 32 bit on 64-bit
Windows, depending on the "platform" compiler switch.
- native int types (IntPtr) are 32 or 64 bit depending on the platform.
Note that the 2GB limit is something imposed by the CLR; however, nothing
stops you from creating multiple 2GB objects on 64-bit Windows, something that
is not possible when running 32-bit.
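A quick way to observe which of these cases applies at run time is to inspect IntPtr.Size, which reflects the native pointer width the JIT compiled for (a minimal sketch; the printed text is illustrative):

```csharp
using System;

// IntPtr.Size is 4 in a 32-bit (x86) process and 8 in a 64-bit (x64/IA64)
// process, regardless of which CPU or OS the machine itself runs.
int pointerBytes = IntPtr.Size;
Console.WriteLine($"{pointerBytes * 8}-bit process");
```

(Later runtimes added `Environment.Is64BitProcess`, but in the .NET 2.0 era of this thread, `IntPtr.Size` was the standard check.)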

Willy.
Nov 17 '07 #9

Peter Duniho wrote:
On 2007-11-16 19:23:57 -0800, Jon Skeet [C# MVP] <sk***@pobox.com> said:
>There's been a 64-bit CLR around for a while. On such a CLR, references
are 64 bits wide, but stores of references are still atomic.

Ah, okay. I guess it's just that the .NET API itself isn't 64-bit
(objects no larger than 2GB, for example).
64 bit means that virtual addresses are 64 bit. In C# terms, that means
IntPtr is 64 bit.

64 bit does not mean that every data item is 64 bit. Apparently the
.NET CLR, even in 64 bit, has a length field that is a signed 32-bit value.

I would expect that limitation to be lifted within the next 5 years. But
we will see.

Arne

Nov 17 '07 #10

On 2007-11-17 01:00:35 -0800, "Willy Denoyette [MVP]"
<wi*************@telenet.be> said:
The .NET API is (32/64) bit agnostic,
That's a matter of opinion, I think. Specifically, it's my opinion
that supporting 64-bit code is more than just the width of a pointer.

For example, I wouldn't call an API that uses 32-bit variables to
define lengths of arrays "32/64 bit agnostic". And I'm not aware of a
.NET version that you can use in a completely 64-bit way. There are
lots of places where you'll run into a limit of either 2^32 or 2^31
(depending on signed/unsigned, of course).
[...]
Note that the 2GB limit is something imposed by the CLR; however,
nothing stops you from creating multiple 2GB objects on 64-bit Windows,
something that is not possible when running 32-bit.
I do appreciate that when running .NET on 64-bit Windows, there are
some advantages. But that doesn't mean you can write true 64-bit .NET
code.

Pete

Nov 17 '07 #11

"Peter Duniho" <Np*********@NnOwSlPiAnMk.com> wrote in message
news:2007111710511675249-NpOeStPeAdM@NnOwSlPiAnMkcom...
On 2007-11-17 01:00:35 -0800, "Willy Denoyette [MVP]"
<wi*************@telenet.be> said:
>The .NET API is (32/64) bit agnostic,

That's a matter of opinion, I think. Specifically, it's my opinion that
supporting 64-bit code is more than just the width of a pointer.
This is not only about the width of a pointer. There are three kinds of JIT
compiler in V2, producing 32-bit (x86 instruction set) or 64-bit (x64 or IA64
instruction set) code; the register set and the pointer width are 32 bit or
64 bit, and the register set depth depends on the machine architecture. The
CLR and the run-time libraries (CRT plus system DLLs) are 32- or 64-bit native
C/C++ compiled code. When I said .NET code is bit agnostic, I meant that
it's not the CIL that determines the bitness; it's the JIT compiler (and
obviously the CLR) that does, depending on the underlying platform.
Notice that there is no such thing as a .NET process at run time; what
runs is just another 32-bit or 64-bit Windows process.
For example, I wouldn't call an API that uses 32-bit variables to define
lengths of arrays "32/64 bit agnostic". And I'm not aware of a .NET
version that you can use in a completely 64-bit way. There are lots of
places you'll run into where your limit is either 2^32 or 2^31 (depending
on signed/unsigned, of course).
The 2GB size limit imposed on an object, or more specifically on an array
(contiguous bytes!), is not in the type system; it's purely artificial.
There is no such thing as a "variable" that restricts the size of an array
(or an object in general) to 2^31 or 2^32. The array's RTT "length" field
is 32 bit or 64 bit (there is even the GetLongLength method and LongLength
property), depending on the CLR (and JIT). Note also that this field
denotes the number of *elements* in the array, so, even on 32-bit, an array
could in principle hold 2^32 * 8 bytes (long[], double[]) = 32 GB, and even
more for an array of reference types like strings, for instance. However,
the CLR team decided to restrict the *memory* allocated by a single object
on the GC heap to 2GB; this restriction doesn't mean that your application
is not a true 64-bit application.
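The Length/LongLength distinction above can be seen directly in code (a minimal sketch; on the CLRs discussed here the two always agree, precisely because a single object is capped at 2 GB):

```csharp
using System;

int[] data = new int[1000];

int len = data.Length;          // element count as a signed 32-bit Int32
long longLen = data.LongLength; // the same count, typed as a 64-bit Int64

// LongLength's type is 64 bits wide, but it can never need more than
// 32 bits of it while the 2 GB per-object cap is in force.
Console.WriteLine($"{len} {longLen}");
```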
Note that native frameworks and class libraries impose the same restrictions
on the size of their containers. Do you think that there is no restriction on
the size of a std::vector? Well, std::vector (from the MS CRT) has a 32-bit
"max_length" field, for both 32 and 64 bit; does that mean that your 64-bit
compiled C++ applications aren't true 64-bit?
Similar restrictions are baked into the 64-bit OS; a lot of data structures
are limited to 2^32 or less. Do you believe you can "malloc" or
"VirtualAlloc" 2^64 bytes of memory on 64-bit Windows (you can't allocate
>500GB on current 64-bit Windows)? Does that mean that the OS is not a true
64-bit OS?

>[...]
Note that the 2GB limit is something imposed by the CLR; however, nothing
stops you from creating multiple 2GB objects on 64-bit Windows, something
that is not possible when running 32-bit.

I do appreciate that when running .NET on 64-bit Windows, there are some
advantages. But that doesn't mean you can write true 64-bit .NET code.
Sure you can, what stops you?

Willy.

Nov 18 '07 #12

On 2007-11-17 16:24:41 -0800, "Willy Denoyette [MVP]"
<wi*************@telenet.be> said:
[...]
The 2GB size limit imposed on an object, or more specifically on an array
(contiguous bytes!), is not in the type system; it's purely artificial.
There is no such thing as a "variable" that restricts the size of an
array (or an object in general) to 2^31 or 2^32.
Of course there is.
The array's RTT "length" field is 32 bit or 64 bit (there is even the
GetLongLength method and LongLength property), depending on the CLR (and
JIT).
Array.Length is _defined_ in the .NET API as being a 32-bit value.
Even if you count the LongLength property, it's not true 64-bit support,
since it will never return a value that actually requires the 64-bit
width.

This is the sort of thing I mean when I say that .NET doesn't
support 64-bit code. Array.LongLength is about as close to 64-bit support
as .NET comes, and frankly even there it fails, since the property can
never actually take advantage of more than 32 bits. The rest of .NET
doesn't even provide the illusion of having a 64-bit API. It's very
tightly tied to the 32-bit world.
Note also, that this field denotes the number of *elements* in the array,
Yes, I know. So what? My point is that there's more to being 64-bit
than just the pointer size, or for that matter the size of the object
being stored. In fact, that's what I wrote already.

The difference between Win16 and Win32 is much more than just the size
of a pointer. The same thing applies here.
so, even on 32-bit, an array could possibly hold 2^32 * 8 (long[],
double[]) 32GBytes and even more for an array of reference types like
strings for instance.
So what? You still can't have an array longer than a length defined by
32 bits.
However, the CLR team decided to restrict the *memory* allocated by a
single object on the GC heap to 2GB, this restriction doesn't mean that
your application is not a true 64-bit application.
That's only true if you use a very narrow definition of "true 64-bit
application". I don't happen to agree with your definition, nor is
your definition consistent with the use of "N-bit" to describe APIs
throughout the history of Windows (and before, for that matter).
Note that native frameworks and class libraries impose the same
restrictions on the size of their containers. Do you think that there is
no restriction on the size of a std::vector?
Why would you think that I do? Looks like you're going off on a tangent here.
well, std::vector (from the MS CRT) has a "max_length" field of
32-bit, this for 32 and 64 bit, does that mean that your 64-bit
compiled C++ applications aren't true 64-bit?.
Who cares? This isn't about whether the "compiled application" is a
"true 64-bit" application. This is about the API that .NET exposes.

And for sure, your std::vector example is an example of an API that IS
NOT 64-bit.
Similar restrictions are baked into the 64-bit OS; a lot of data
structures are limited to 2^32 or less. Do you believe you can "malloc"
or "VirtualAlloc" 2^64 bytes of memory on 64-bit Windows (you can't allocate
>500GB on current 64-bit Windows)? Does that mean that the OS is not a
true 64-bit OS?
No, of course not. It's not even a decent straw man you're showing me here.

In a 64-bit API, the API allows the code to at least attempt to
allocate a full 2^64. The fact that you'll never succeed is
immaterial, just as the fact that you'd never succeed to allocate a
full 2^32 in Win32 was immaterial. The API itself supported a full 32
bits in the allocation methods, and a 64-bit API need only support a
full 64 bits in the allocation methods, whether or not one can actually
allocate an object that big.

You are getting confused between the API and the underlying mechanics.
You might as well say that whether an application is 64-bit, 32-bit, or
something even less (16-bit? 8-bit?) depends on how much space is left
in the process's virtual address space. That would be just as silly as
what you're asserting above.
>I do appreciate that when running .NET on 64-bit Windows, there are
some advantages. But that doesn't mean you can write true 64-bit .NET
code.

Sure you can, what stops you?
The fact that nothing about .NET actually supports true 64-bit code.

Just because you can compile a 32-bit application written to a 32-bit API
as a 64-bit executable doesn't make the API a 64-bit API, no matter how
wishfully you think it will.

Pete

Nov 18 '07 #13

Peter Duniho wrote:
This is the sort of thing I mean when I say that .NET doesn't
support 64-bit code. Array.LongLength is about as close to 64-bit support
as .NET comes,
Absolutely wrong. You can allocate multiple objects that are much
bigger than what you can in a 32 bit system.
and frankly even there it fails since the property never
actually can take advantage of more than 32 bits.
The current implementation does not. .NET 4.0 may !
The rest of .NET
doesn't even provide the illusion of having a 64-bit API. It's very
tightly tied to the 32-bit world.
The API is mostly 32/64 agnostic. They added 64-bit long variants to
most of the relevant APIs.
>Note also, that this field denotes the number of *elements* in the array,

Yes, I know. So what? My point is that there's more to being 64-bit
than just the pointer size,
But there is not.

A 64 bit system is a system with a 64 bit virtual address space and
a 32 bit system is a system with a 32 bit virtual address space.

Physical address space does not count.

Register length does not count.

Longest length of operands in instruction set does not count.

And certain limitation in API's does certainly not count.

A 64-bit .NET app can address a 64-bit address space (which is
more or less equivalent to IntPtr being 64 bit), making it a
64-bit environment.

That the CLR does not allow you to allocate objects larger
than 2 GB in the heap is a limitation, but not related
to 32 versus 64 bit.
>However, the CLR team decided to restrict the *memory* allocated by a
single object on the GC heap to 2GB, this restriction doesn't mean
that your application is not a true 64-bit application.

That's only true if you use a very narrow definition of "true 64-bit
application". I don't happen to agree with your definition,
It is the common definition used.
>Note that native frameworks and class libraries impose the same
restrictions on the size their containers, do you think that there is
no restriction for the size of a std::vector?

Why would you think that I do? Looks like you're going off on a tangent
here.
> well, std::vector (from the MS CRT) has a "max_length" field of
32-bit, this for 32 and 64 bit, does that mean that your 64-bit
compiled C++ applications aren't true 64-bit?.

Who cares? This isn't about whether the "compiled application" is a
"true 64-bit" application. This is about the API that .NET exposes.

And for sure, your std::vector example is an example of an API that IS
NOT 64-bit.
If it takes 64 bit pointers it is a 64 bit API.
In a 64-bit API, the API allows the code to at least attempt to allocate
a full 2^64.
No.

A 64 bit allocation will return 64 bit pointers so that a 64 bit address
space can be utilized.

How big a chunk you can allocate is not a bitness question.

The fact that nothing about .NET actually supports true 64-bit code.

Just because you can compile 32-bit application written to a 32-bit API
as a 64-bit executable doesn't make the API a 64-bit API, no matter how
wishfully you think it will.
If the methods accept 64-bit addresses in and out, it is 64 bit. Most
likely all of the .NET APIs do that.

No problem.

Arne
Nov 18 '07 #14
