
Memory Limit for Visual Studio 2005???

It looks like System::Collections::Generic.List throws an OUT_OF_MEMORY
exception whenever memory allocated exceeds 256 MB. I have 1024 MB on my system
so I am not even out of physical RAM, much less virtual memory.

Are other people experiencing this same problem?

Dec 22 '06 #1
81 Replies


Hello Peter,

Memory is allocated in blocks of 32/64 MB, so if your GC heap is fragmented
the framework may not find free space to allocate an additional block, even
though you can have enough memory.

> It looks like System::Collections::Generic.List throws an
> OUT_OF_MEMORY exception whenever memory allocated exceeds 256 MB. I
> have 1024 MB on my system so I am not even out of physical RAM, much
> less virtual memory.
>
> Are other people experiencing this same problem?
---
WBR,
Michael Nemtsev [C# MVP] :: blog: http://spaces.live.com/laflour

"The greatest danger for most of us is not that our aim is too high and we
miss it, but that it is too low and we reach it" (c) Michelangelo
Dec 22 '06 #2

I checked in task manager and my peak memory usage for VS2005 is 288,060K
but then I have 2 Gigs on this system.

Regards,
John

"Peter Olcott" <No****@SeeScreen.comwrote in message
news:RK*******************@newsfe12.phx...
It looks like System::Collections::Generic.List throws and OUT_OF_MEMORY
exception whenever memory allocated exceeds 256 MB. I have 1024 MB on my
system so I am not even out of physical RAM, much less virtual memory.

Are other people experiencing this same problem?

Dec 22 '06 #3

Hi,

What kind of project is this?

IIRC a web app has a limit on the amount of memory it can consume.
--
Ignacio Machin
machin AT laceupsolutions com

"Peter Olcott" <No****@SeeScreen.comwrote in message
news:RK*******************@newsfe12.phx...
It looks like System::Collections::Generic.List throws and OUT_OF_MEMORY
exception whenever memory allocated exceeds 256 MB. I have 1024 MB on my
system so I am not even out of physical RAM, much less virtual memory.

Are other people experiencing this same problem?

Dec 22 '06 #4

It may be a Visual Studio Express limitation.

"Michael Nemtsev" <ne*****@msn.comwrote in message
news:a2***************************@msnews.microsof t.com...
Hello Peter,

Memmory is allocated by blocks 32/64 mb, so if your GC is fragmented FW may
not found free space to allocate aditional block, albeit you can have enough
memmory

POIt looks like System::Collections::Generic.List throws and
POOUT_OF_MEMORY exception whenever memory allocated exceeds 256 MB. I
POhave 1024 MB on my system so I am not even out of physical RAM, much
POless virtual memory.
POPOAre other people experiencing this same problem?
PO---
WBR,
Michael Nemtsev [C# MVP] :: blog: http://spaces.live.com/laflour

"The greatest danger for most of us is not that our aim is too high and we
miss it, but that it is too low and we reach it" (c) Michelangelo


Dec 22 '06 #5

Try and add one gig of Bytes to a List<Byte> and see if it doesn't abnormally
terminate.

"John J. Hughes II" <in*****@nowhere.comwrote in message
news:Oy**************@TK2MSFTNGP04.phx.gbl...
>I checked in task manager and my peek memory usage for VS2005 is 288,060K but
then I have 2Gigs on this system.

Regards,
John

"Peter Olcott" <No****@SeeScreen.comwrote in message
news:RK*******************@newsfe12.phx...
>It looks like System::Collections::Generic.List throws and OUT_OF_MEMORY
exception whenever memory allocated exceeds 256 MB. I have 1024 MB on my
system so I am not even out of physical RAM, much less virtual memory.

Are other people experiencing this same problem?


Dec 22 '06 #6

It's a regular desktop Windows project under Visual Studio 2005 Express.

"Ignacio Machin ( .NET/ C# MVP )" <machin TA laceupsolutions.com> wrote in
message news:%2***************@TK2MSFTNGP06.phx.gbl...
> Hi,
>
> What kind of project is this?
>
> IIRC a web app has a limit on the amount of memory it can consume.
> --
> Ignacio Machin
> machin AT laceupsolutions com


Dec 22 '06 #7

"Peter Olcott" <No****@SeeScreen.com> wrote in message
news:RK*******************@newsfe12.phx...
> It looks like System::Collections::Generic.List throws an OUT_OF_MEMORY exception
> whenever memory allocated exceeds 256 MB. I have 1024 MB on my system so I am not even out
> of physical RAM, much less virtual memory.
>
> Are other people experiencing this same problem?
Why do you mention VS2005? This is thrown at program run-time, right? The run-time
limitation is 2GB for a CLR object type, which means that your List can hold at most
2GB; however, due to heap fragmentation in a 32-bit process, the maximum size of a single
object is limited to somewhere from 1.2GB up to 1.8GB depending on the type of OS and
application. A simple console application should be able to allocate a single List of
~1.6GB or more; Windows Forms will top out at ~1.2GB for the largest List (or array or
whatever).

Willy.


Dec 22 '06 #8
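Willy's numbers above are easy to check empirically. The following is an illustrative sketch (not from the thread): it asks for a ~2GB byte array and halves the request until a single contiguous allocation succeeds, which on a 32-bit process typically lands in the 1.2-1.7GB range he describes.

```csharp
using System;

class MaxAllocProbe
{
    static void Main()
    {
        // Start just under 2GB (the CLR's per-object ceiling) and halve
        // until a single contiguous byte[] can be allocated.
        long request = int.MaxValue;
        while (request > 0)
        {
            try
            {
                byte[] block = new byte[request];
                Console.WriteLine("Largest single allocation: {0:N0} bytes", request);
                break;
            }
            catch (OutOfMemoryException)
            {
                // Too big for any contiguous free region in the heap; try half.
                request /= 2;
            }
        }
    }
}
```

The exact number this prints depends on the OS, the process type, and how fragmented the address space already is, which is precisely the point of the discussion.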

"Willy Denoyette [MVP]" <wi*************@telenet.be> wrote in message
news:%2****************@TK2MSFTNGP06.phx.gbl...
> Why do you mention VS2005? This is thrown at program run-time, right? The
> run-time limitation is 2GB for a CLR object type, which means that your List
> can hold at most 2GB; however, due to heap fragmentation in a 32-bit
> process, the maximum size of a single object is limited to somewhere from
> 1.2GB up to 1.8GB depending on the type of OS and application. A simple
> console application should be able to allocate a single List of ~1.6GB or
> more; Windows Forms will top out at ~1.2GB for the largest List (or array or
> whatever).
>
> Willy.
If you have actually tested this and thus know that it works empirically rather
than theoretically, then this must be a limitation of Visual Studio 2005
Express. I could not get Visual Studio 2005 Express to allocate more than 256 MB
without abnormally terminating.

I was very pleased with its performance relative to native-code Visual C++ 6.0.
It was something like 50% faster on every test.

Dec 22 '06 #9

"Peter Olcott" <No****@SeeScreen.com> wrote in message
news:HP********************@newsfe07.phx...
> If you have actually tested this and thus know that it works empirically rather than
> theoretically, then this must be a limitation of Visual Studio 2005 Express. I could not
> get Visual Studio 2005 Express to allocate more than 256 MB without abnormally
> terminating.
>
> I was very pleased with its performance relative to native-code Visual C++ 6.0. It was
> something like 50% faster on every test.
No, once again this is run-time related; VS does not (and can't) impose such restrictions. Try
compiling and running your code from the command-line prompt if you don't trust me.
Keep in mind that when you don't pre-allocate the List, you will end up with a List doubling
its size each time it gets filled to maximum capacity, so the OOM exception might get
thrown when no free *contiguous* block of ~512MB can be found in excess of the already
allocated ~256MB.

Willy.
Dec 22 '06 #10
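The doubling behavior Willy describes can be observed directly via List<T>.Capacity. The sketch below (illustrative, not from the thread) prints the capacity each time the backing array is reallocated, showing roughly a doubling at each step.

```csharp
using System;
using System.Collections.Generic;

class CapacityGrowth
{
    static void Main()
    {
        var list = new List<int>();
        int lastCapacity = -1;
        for (int i = 0; i < 1000000; i++)
        {
            list.Add(i);
            if (list.Capacity != lastCapacity)
            {
                // Each reallocation roughly doubles the backing array, so a
                // grow step needs a new contiguous block about twice the
                // current size while the old one still exists.
                Console.WriteLine("Count {0,9:N0} -> Capacity {1,9:N0}",
                                  list.Count, list.Capacity);
                lastCapacity = list.Capacity;
            }
        }
    }
}
```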

"Willy Denoyette [MVP]" <wi*************@telenet.be> wrote in message
news:%2****************@TK2MSFTNGP06.phx.gbl...
> No, once again this is run-time related; VS does not (and can't) impose such
> restrictions. Try compiling and running your code from the command-line
> prompt if you don't trust me.
> Keep in mind that when you don't pre-allocate the List, you will end up with a
> List doubling its size each time it gets filled to maximum capacity, so the
> OOM exception might get thrown when no free *contiguous* block of ~512MB can
> be found in excess of the already allocated ~256MB.
>
> Willy.

It works fine for native code std::vector, yet does not work for either managed
std::vector or List<Byte>. The command line version failed to compile.
Dec 22 '06 #11

Peter Olcott wrote:
> It looks like System::Collections::Generic.List throws an
> OUT_OF_MEMORY exception whenever memory allocated exceeds 256 MB. I
> have 1024 MB on my system so I am not even out of physical RAM, much
> less virtual memory.

I was curious if there was a limit there, so I wrote this:

private void button1_Click(object sender, EventArgs e)
{
    int bytesPerArray = 1024 * 32; // 32KB per array. No large object heap interaction.
    long totalBytes = 0;
    List<byte[]> bytes = new List<byte[]>();
    while (true)
    {
        byte[] b = new byte[bytesPerArray];
        bytes.Add(b);
        totalBytes += bytesPerArray;
        System.Diagnostics.Debug.WriteLine("Total: " + totalBytes.ToString());
    }
}

I compiled this as an x86 application and ran it.

In my output window, the last thing there was:
Total: 1704427520
A first chance exception of type 'System.OutOfMemoryException'
occurred in WindowsApplication2.exe

This is exactly what I expected to see, as I know that each 32-bit Windows
process gets 4GB of virtual memory space, and that 4GB is split in half,
giving 2GB to user code to play with. The managed heap can typically (if
it's not fragmented) get up to 1.5-1.7 gigabytes before issues arise.

If I compile this for x64 (or leave it as Any CPU), the limits are much
higher.

My machine does have 4GB of memory on it, but I've seen these exact same
results on machines with far less memory. (I build very big, very scalable
applications all day long... and running into memory limitations in 32-bit
land was a big problem I had for years.)

--
Chris Mullins, MCSD.NET, MCPD:Enterprise
http://www.coversant.net/blogs/cmullins
Dec 22 '06 #12

"Peter Olcott" wrote:
> Try and add one gig of Bytes to a List<Byte> and see if it doesn't
> abnormally terminate.

Well, I did just that (got to 1.7GB) and it ran fine as an x86 application.
I was allocating 32KB byte chunks though.

This is a VERY different use case from allocating a single 1GB buffer. I
tried to allocate a single 1GB array:

private void button2_Click(object sender, EventArgs e)
{
    List<byte[]> bb = new List<byte[]>();
    int GB = 1024 * 1024 * 1024;
    byte[] b = new byte[GB];
    bb.Add(b);
    MessageBox.Show("Allocated 1 GB");
}

... and this immediately threw an OOM when I compiled it under x86.
It did run perfectly (and instantly) when compiled and run under x64.

Irrespective of platform, if you're really allocating memory in chunks
this big, you need to rethink your algorithm.

Even in x64 land, holding onto chunks in the heap this big would be scary.

--
Chris Mullins, MCSD.NET, MCPD:Enterprise
http://www.coversant.net/blogs/cmullins
Dec 22 '06 #13

"Peter Olcott" <No****@SeeScreen.com> wrote
[256 meg limit for List<>]
> If you have actually tested this and thus know that it works empirically
> rather than theoretically, then this must be a limitation of Visual Studio
> 2005 Express. I could not get Visual Studio 2005 Express to allocate more
> than 256 MB without abnormally terminating.
I have explicitly tested this, and it works fine.

I've been writing .Net applications that max out .Net Memory Management for
years now, and know it to work fine.

You keep missing the fact that VS2005 isn't a compiler. It's using the
standard CSC compiler that comes with the .Net framework. Even this isn't
really a compiler, as it only spits out IL.

The Jitter is what "compiles" that IL to native code and executes it. How
you generated that IL doesn't matter. The jitter also comes with the .Net
framework, and isn't a part of VS2005 Express.
> I was very pleased with its performance relative to native-code Visual C++
> 6.0. It was something like 50% faster on every test.

Like everything else, it depends on your use cases. In my experience, the
performance difference between native C++ and managed C# has always been
"close enough" to side with C# due to the increase in developer
productivity.

--
Chris Mullins, MCSD.NET, MCPD:Enterprise
http://www.coversant.net/blogs/cmullins
Dec 22 '06 #14

"Chris Mullins" <cm******@yahoo.com> wrote in message
news:eL**************@TK2MSFTNGP04.phx.gbl...
> "Peter Olcott" wrote:
> > Try and add one gig of Bytes to a List<Byte> and see if it doesn't abnormally
> > terminate.
>
> Well, I did just that (got to 1.7GB) and it ran fine as an x86 application. I
> was allocating 32KB byte chunks though.
>
> This is a VERY different use case from allocating a single 1GB buffer. I tried
> to allocate a single 1GB array:
>
> private void button2_Click(object sender, EventArgs e)
> {
>     List<byte[]> bb = new List<byte[]>();
>     int GB = 1024 * 1024 * 1024;
>     byte[] b = new byte[GB];
>     bb.Add(b);
>     MessageBox.Show("Allocated 1 GB");
> }
>
> ... and this immediately threw an OOM when I compiled it under x86.
> It did run perfectly (and instantly) when compiled and run under x64.
>
> Irrespective of platform, if you're really allocating memory in chunks this
> big, you need to rethink your algorithm.
Would you think that 18,000 hours would be enough thought? That's how much I
have into it.
> Even in x64 land, holding onto chunks in the heap this big would be scary.
>
> --
> Chris Mullins, MCSD.NET, MCPD:Enterprise
> http://www.coversant.net/blogs/cmullins


Dec 22 '06 #15

"Chris Mullins" <cm******@yahoo.com> wrote in message
news:Oa**************@TK2MSFTNGP03.phx.gbl...
> "Peter Olcott" <No****@SeeScreen.com> wrote
> [256 meg limit for List<>]
> > If you have actually tested this and thus know that it works empirically
> > rather than theoretically, then this must be a limitation of Visual Studio
> > 2005 Express. I could not get Visual Studio 2005 Express to allocate more
> > than 256 MB without abnormally terminating.
>
> I have explicitly tested this, and it works fine.
>
> I've been writing .Net applications that max out .Net Memory Management for
> years now, and know it to work fine.
>
> You keep missing the fact that VS2005 isn't a compiler. It's using the
> standard CSC compiler that comes with the .Net framework. Even this isn't
> really a compiler, as it only spits out IL.
I have written a couple of compilers, and the jitter is not a compiler. The
tricky part is translating the nested do-while and if-then-else statements
composed of compound relational expressions into jump code. The jitter does
not need to do this; this part is already done. All the jitter has to do is
translate pseudo assembly language into machine code.
> The Jitter is what "compiles" that IL to native code and executes it. How you
> generated that IL doesn't matter. The jitter also comes with the .Net
> framework, and isn't a part of VS2005 Express.
> > I was very pleased with its performance relative to native-code Visual C++
> > 6.0. It was something like 50% faster on every test.
>
> Like everything else, it depends on your use cases. In my experience, the
> performance difference between native C++ and managed C# has always been "close
> enough" to side with C# due to the increase in developer productivity.
>
> --
> Chris Mullins, MCSD.NET, MCPD:Enterprise
> http://www.coversant.net/blogs/cmullins


Dec 22 '06 #16

"Peter Olcott" <No****@SeeScreen.com> wrote in message
news:RH********************@newsfe07.phx...
> It works fine for native code std::vector, yet does not work for either managed
> std::vector or List<Byte>. The command line version failed to compile.

There is no such thing as a MANAGED std::vector, and the program should compile just fine
from the command line if it compiles from VS. Both VS and csc.exe drive the same compiler.

Willy.
Dec 22 '06 #17

Try this simpler case:

uint SIZE = 0x3FFFFFFF; // 1024 MB

List<uint> Temp;
for (uint N = 0; N < SIZE; N++)
    Temp.Add(N);

Mine now bombs out just short of half my memory. I am guessing that it runs
out of actual RAM when it doubles the size on the next reallocation. Unlike the
native code compiler, the managed code compiler must have actual RAM; virtual
memory will not work.

"Chris Mullins" <cm******@yahoo.comwrote in message
news:eW**************@TK2MSFTNGP06.phx.gbl...
Peter Olcott Wrote:

POIt looks like System::Collections::Generic.List throws and
POOUT_OF_MEMORY exception whenever memory allocated exceeds 256 MB. I
POhave 1024 MB on my system so I am not even out of physical RAM, much
POless virtual memory.

I was curious if there was a limt there, so I wrote this:

private void button1_Click(object sender, EventArgs e)
{
int bytesPerArray = 1024 * 32; // 32k per byte. No large object heap
interaction.
long totalBytes = 0;
List<byte[]bytes = new List<byte[]>();
while (true)
{
byte[] b = new byte[bytesPerArray];
bytes.Add(b);
totalBytes += bytesPerArray;
System.Diagnostics.Debug.WriteLine("Total: " + totalBytes.ToString());
}
}

I compiled this as an x86 application, and ran it.

In my output window, the last things in there was:
Total: 1704427520
A first chance exception of type 'System.OutOfMemoryException'
occurred in WindowsApplication2.exe

This is exactly what I expected to see, as I know that each 32-bit Windows
Process gets 4GB of virual memory space, and that 4GB is split in half, giving
2GB to user code to play with. The managed heap can typically (if it's not
fragmented) get up to 1.5-1.7 gigabytes before issues arise.

If I compile this for x64 (or leave it as Any CPU), the limites are much
higher.

My machine does have 4GB of memory on it, but I've seen these exact same
results on machines with far less memory. (I build very big, very scalable,
applications all day long... and running into memory limitiations in 32-bit
land was a big problem I had for years).

--
Chris Mullins, MCSD.NET, MCPD:Enterprise
http://www.coversant.net/blogs/cmullins


Dec 22 '06 #18

"Willy Denoyette [MVP]" <wi*************@telenet.be> wrote in message
news:eO**************@TK2MSFTNGP04.phx.gbl...
> There is no such thing like MANAGED std::vector,

Yes, there is, just recently. This saves old C++ programmers like me a lot of
learning curve switching to .NET.

> and the program should compile just fine from the command line if it compiles
> from VS. Both VS and csc.exe are driving the same compiler.
>
> Willy.
Dec 22 '06 #19

"Chris Mullins" <cm******@yahoo.com> wrote in message
news:eL**************@TK2MSFTNGP04.phx.gbl...
> [snip]
> Irrespective of platform, if you're really allocating memory in chunks this big, you
> need to rethink your algorithm.
>
> Even in x64 land, holding onto chunks in the heap this big would be scary.
>
> --
> Chris Mullins, MCSD.NET, MCPD:Enterprise
> http://www.coversant.net/blogs/cmullins


Windows Forms needs some native DLLs that are loaded by the OS loader at addresses which
further fragment the process heap, and because the GC heap allocates from the process heap,
it cannot allocate a contiguous area larger than the largest process heap fragment. The net
result is that the most simple Windows programs (native and managed) cannot allocate more
than ~1.2 GB or less, depending on OS version, SP, and OS language version.

Willy.

Dec 22 '06 #20

"Peter Olcott" <No****@SeeScreen.com> wrote in message
news:Cn********************@newsfe06.phx...
> Try this simpler case:
>
> uint SIZE = 0x3FFFFFFF; // 1024 MB
>
> List<uint> Temp;
> for (uint N = 0; N < SIZE; N++)
>     Temp.Add(N);
>
> Mine now bombs out just short of half my memory. I am guessing that it runs out of
> actual RAM when it doubles the size on the next reallocation. Unlike the native code
> compiler, the managed code compiler must have actual RAM; virtual memory will not work.
3FFFFFFF is 1GB!!!! More, your List holds uints, that is 4 bytes per entry, which means that in
your sample you are trying to allocate 4GB! It's obvious that this will fail.

And as I said in another reply, you have to pre-allocate the List, otherwise you'll need
much more free CONTIGUOUS memory than 1GB.
A List that isn't pre-allocated starts with a 32 byte array as back-end store; this array
is extended each time it overflows. Extending means reserving another blob from the heap
with a new size = (original size * 2).
The final result is that a 512MB List will extend to 1GB when it expands, but that means
also that a new blob of 1GB must be found before the old block can be copied to the new
blob (List).

This should probably work...
List<byte> Temp = new List<byte>(1024*1024*1024);

Willy.

Dec 22 '06 #21
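Willy's pre-allocation suggestion can be sketched as follows (illustrative, not from the thread): passing the final size to the List<byte> constructor reserves the backing array once, so the grow-by-doubling reallocations, and the need for a second contiguous block, never happen.

```csharp
using System;
using System.Collections.Generic;

class Preallocate
{
    static void Main()
    {
        const int size = 64 * 1024 * 1024; // 64MB of bytes; scale as memory allows
        // Reserving capacity up front allocates the backing array exactly once.
        var temp = new List<byte>(size);
        for (int i = 0; i < size; i++)
            temp.Add((byte)(i & 0xFF));
        Console.WriteLine("Count = {0:N0}, Capacity = {1:N0}",
                          temp.Count, temp.Capacity);
    }
}
```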

"Peter Olcott" <No****@SeeScreen.com> wrote in message
news:kp********************@newsfe06.phx...
> > There is no such thing like MANAGED std::vector,
>
> Yes, there is, just recently. This saves old C++ programmers like me a lot of
> learning curve switching to .NET.
No, there is not; the managed "std"-like templates are currently in a "closed" pre-beta
stage and are scheduled for public release after the ORCAS release (another wait of at
least one year), and they will not belong to the std namespace; that is, they will not be
named std::vector.

Willy.

Dec 22 '06 #22

P: n/a

"Willy Denoyette [MVP]" <wi*************@telenet.bewrote in message
news:OY**************@TK2MSFTNGP06.phx.gbl...
"Peter Olcott" <No****@SeeScreen.comwrote in message
news:Cn********************@newsfe06.phx...
>Try this simpler case:
uint SIZE = 0x3FFFFFFF; // 1024 MB

List<uint> Temp = new List<uint>();
for (uint N = 0; N < SIZE; N++)
Temp.Add(N);

Mine now bombs out on just short of half my memory. I am guessing that it
runs out of actual RAM when it doubles the size on the next reallocation.
Unlike the native code compiler, the managed code compiler must have actual
RAM, virtual memory will not work.

3FFFFFFF is 1G entries!!!! More, your List holds uints, that is 4 bytes per entry, which
means that in your sample you are trying to allocate 4GB! It's obvious that
this will fail.
I forgot to divide by four, make that 0xFFFFFFF; // 256M uints (1GB)
This should work on a system with 2GB of RAM.
>
And as I said in another reply, you have to pre-allocate the List, otherwise
you'll need much more free CONTIGUOUS memory than 1GB.
It should not be much more; it should be exactly twice as much. Native code can
handle this using virtual memory. Is it that .NET is not as good at using
virtual memory? By using virtual memory the reallocation doubling swaps out to
disk, and is thus able to use all of actual RAM.
A list that isn't pre-allocated starts with a 32 byte array as back-end
store; this array is extended each time it overflows. Extending means
reserving another blob from the heap with a new size = (original size * 2).
The final result is that a 512MB List will extend to 1GB when it expands, but
that also means that a new blob of 1GB must be found before the old block can
be copied to the new blob (List).

This should probably work...
List<byte> Temp = new List<byte>(1024*1024*1024);
I am specifically testing the difficult (and common) case where one does not
know how much memory is needed in advance. The performance of this aspect of
.NET determines whether or not .NET is currently feasible for my application.
>
Willy.

Dec 22 '06 #23

P: n/a
"Peter Olcott" <No****@SeeScreen.comwrote in message
news:kp********************@newsfe06.phx...
>
"Willy Denoyette [MVP]" <wi*************@telenet.bewrote in message
news:eO**************@TK2MSFTNGP04.phx.gbl...
>"Peter Olcott" <No****@SeeScreen.comwrote in message
news:RH********************@newsfe07.phx...
>>>
"Willy Denoyette [MVP]" <wi*************@telenet.bewrote in message
news:%2****************@TK2MSFTNGP06.phx.gbl.. .
"Peter Olcott" <No****@SeeScreen.comwrote in message
news:HP********************@newsfe07.phx...
>
"Willy Denoyette [MVP]" <wi*************@telenet.bewrote in message
news:%2****************@TK2MSFTNGP06.phx.gbl.. .
>"Peter Olcott" <No****@SeeScreen.comwrote in message
>news:RK*******************@newsfe12.phx...
>>It looks like System::Collections::Generic.List throws and OUT_OF_MEMORY exception
>>whenever memory allocated exceeds 256 MB. I have 1024 MB on my system so I am not
>>even out of physical RAM, much less virtual memory.
>>>
>>Are other people experiencing this same problem?
>>>
>>>
>>>
>>
>Why do you mention VS2005? This is thrown at program run-time, right? The run-time
>limitation is 2GB for a CLR object type, that means that your List can only hold at
>most 2GB, however, due heap fragmentation of a 32 bit process, the maximum size of a
>single object is limited to something from 1.2GB up to 1.8GB depending on the type of
>OS and application. A simple console application should be able to allocate a single
>List of ~1.6GB or more , Windows Forms will top at ~1.2GB for the largest List (or
>array or whatever).
>>
>Willy.
>
If you have actually tested this and thus know that it works empirically rather than
theoretically, then this must be a limitation of Visual Studio 2005 Express. I could
not get Visual Studio 2005 Express to allocate more than 256 MB without abnormally
terminating.
>
I was very pleased with its relative performance to Native Code Visual C++ 6.0. It was
something like 50% faster on every test.

No, once again this is run-time related VS does (and can't impose such restrictions) ,
try to compile and run your code from the command-line prompt if you do't trust me.
Keep in mind that when you don't pre-allocate the List, you will end with a List
doubling it's size each time it get's filled to the max. capacity, so the OOM exception
might get thrown when no free *contiguous* block of ~512MB can be found in excess of
the already allocated ~256MB.

Willy.

Willy.

It works fine for native code std::vector, yet does not work for either managed
std::vector or List<Byte. The command line version failed to compile.


There is no such thing like MANAGED std::vector,

Yes, there is just recently. This saves old C++ programmers like me a lot of learning curve
switching to .NET.
For more info on the upcoming Orcas release of Visual Studio and the state of the managed
STL/CLR refer to this:
http://blogs.msdn.com/nikolad/archiv...ntegrated.aspx
and this:
http://blogs.msdn.com/nikolad/archiv...le-online.aspx

Willy.
Dec 22 '06 #24

P: n/a

"Willy Denoyette [MVP]" <wi*************@telenet.bewrote in message
news:e1**************@TK2MSFTNGP06.phx.gbl...
"Peter Olcott" <No****@SeeScreen.comwrote in message
news:kp********************@newsfe06.phx...
>>
"Willy Denoyette [MVP]" <wi*************@telenet.bewrote in message
news:eO**************@TK2MSFTNGP04.phx.gbl...
>>"Peter Olcott" <No****@SeeScreen.comwrote in message
news:RH********************@newsfe07.phx...

"Willy Denoyette [MVP]" <wi*************@telenet.bewrote in message
news:%2****************@TK2MSFTNGP06.phx.gbl. ..
"Peter Olcott" <No****@SeeScreen.comwrote in message
news:HP********************@newsfe07.phx...
>>
>"Willy Denoyette [MVP]" <wi*************@telenet.bewrote in message
>news:%2****************@TK2MSFTNGP06.phx.gbl. ..
>>"Peter Olcott" <No****@SeeScreen.comwrote in message
>>news:RK*******************@newsfe12.phx...
>>>It looks like System::Collections::Generic.List throws and
>>>OUT_OF_MEMORY exception whenever memory allocated exceeds 256 MB. I
>>>have 1024 MB on my system so I am not even out of physical RAM, much
>>>less virtual memory.
>>>>
>>>Are other people experiencing this same problem?
>>>>
>>>>
>>>>
>>>
>>Why do you mention VS2005? This is thrown at program run-time, right?
>>The run-time limitation is 2GB for a CLR object type, that means that
>>your List can only hold at most 2GB, however, due heap fragmentation of
>>a 32 bit process, the maximum size of a single object is limited to
>>something from 1.2GB up to 1.8GB depending on the type of OS and
>>application. A simple console application should be able to allocate a
>>single List of ~1.6GB or more , Windows Forms will top at ~1.2GB for the
>>largest List (or array or whatever).
>>>
>>Willy.
>>
>If you have actually tested this and thus know that it works empirically
>rather than theoretically, then this must be a limitation of Visual
>Studio 2005 Express. I could not get Visual Studio 2005 Express to
>allocate more than 256 MB without abnormally terminating.
>>
>I was very pleased with its relative performance to Native Code Visual
>C++ 6.0. It was something like 50% faster on every test.
>
No, once again this is run-time related VS does (and can't impose such
restrictions) , try to compile and run your code from the command-line
prompt if you do't trust me.
Keep in mind that when you don't pre-allocate the List, you will end with
a List doubling it's size each time it get's filled to the max. capacity,
so the OOM exception might get thrown when no free *contiguous* block of
~512MB can be found in excess of the already allocated ~256MB.
>
Willy.
>
>
>
>
>
Willy.
>
>

It works fine for native code std::vector, yet does not work for either
managed std::vector or List<Byte. The command line version failed to
compile.

There is no such thing like MANAGED std::vector,

Yes, there is just recently. This saves old C++ programmers like me alot of
learning curve switching to .NET.

No, there is not, the managed "std" like templates are currently in a "closed"
pre-beta stage and are scheduled for public release after the ORCAS release
(another wait for at least one year), and will not belong to the std
namespace, that is they will not be named std::vector.

Willy.
http://blogs.msdn.com/vcblog/archive...30/777835.aspx
It looks like you were right, yet I then wonder why the std::vector compiled?
Did it mix native code and managed code together in the same function? (I had a
std::vector, and a Generic.List in the same function).
Dec 22 '06 #25

P: n/a
"Peter Olcott" <No****@SeeScreen.comwrote in message
news:s3****************@newsfe10.phx...
>
"Willy Denoyette [MVP]" <wi*************@telenet.bewrote in message
news:OY**************@TK2MSFTNGP06.phx.gbl...
>"Peter Olcott" <No****@SeeScreen.comwrote in message
news:Cn********************@newsfe06.phx...
>>Try this simpler case:
uint SIZE = 0x3FFFFFFF; // 1024 MB

List<uint> Temp = new List<uint>();
for (uint N = 0; N < SIZE; N++)
Temp.Add(N);

Mine now bombs out on just short of half my memory. I am guessing that it runs out of
actual RAM when it doubles the size on the next reallocation. Unlike the native code
compiler, the managed code compiler must have actual RAM, virtual memory will not work.

3FFFFFFF is 1G entries!!!! More, your List holds uints, that is 4 bytes per entry, which means that
in your sample you are trying to allocate 4GB! It's obvious that this will fail.

I forgot to divide by four, make that 0xFFFFFFF; // 256M uints (1GB)
This should work on a system with 2GB of RAM.
And it works for me!
>>
And as I said in another reply, you have to pre-allocate the List, otherwise you'll need
much more free CONTIGUOUS memory than 1GB.

It should not be much more; it should be exactly twice as much. Native code can handle this
using virtual memory. Is it that .NET is not as good at using virtual memory? By using
virtual memory the reallocation doubling swaps out to disk, and is thus able to use all of
actual RAM.
No it can't, not for managed code nor for native code, because the 2GB space per process
must be shared between code and data; the code is the executable module(s) and all its
dependent modules (DLLs), which are loaded when the process starts. The result is that you
don't have a contiguous area of 2GB for the heap to allocate from.
More, as I said before, the DLLs might get loaded at some addresses which fragment the
heap in such a way that the largest FREE area of CONTIGUOUS memory is much smaller than 2GB,
possibly smaller than 1GB. Memory allocation patterns may even further fragment the heap in
such a way that even trying to allocate a 1MB buffer will throw an OOM. And this all has
nothing to do with .NET; there is no such thing as a ".NET process" at run-time, no
difference between a native and a .NET process, only that
the memory footprint is somewhat larger at process start-up because of the .NET run-time and
its libraries, but this is less than 10MB and is just overhead taken once.

>A list that isn't pre-allocated starts with a 32 byte array as back-end store; this
array is extended each time it overflows. Extending means reserving another blob from the
heap with a new size = (original size * 2).
The final result is that a 512MB List will extend to 1GB when it expands, but that means
also that a new blob of 1GB must be found before the old block can be copied to the new
blob (List).

This should probably work...
List<byte> Temp = new List<byte>(1024*1024*1024);

I am specifically testing the difficult (and common) case where one does not know how much
memory is needed in advance. The performance of this aspect of .NET determines whether or
not .NET is currently feasible for my application.
But you know at least whether you'll need 100KB, 1MB, 500MB or maybe 1GB, don't you?
If you think you'll need 500MB, just start by pre-allocating 550 or 600 or whatever and you are
done, until this gets filled completely and throws an OOM. Note that this is true for
native std::vector as well; both vector and List are "self expanding" by "copying", which
means that you should always be prepared to get OOMs when you are allocating such huge
objects in a 32-bit process.

Willy.


Dec 22 '06 #26

P: n/a
"Peter Olcott" <No****@SeeScreen.comwrote in message news:np*************@newsfe15.phx...
>
"Willy Denoyette [MVP]" <wi*************@telenet.bewrote in message
news:e1**************@TK2MSFTNGP06.phx.gbl...
>"Peter Olcott" <No****@SeeScreen.comwrote in message
news:kp********************@newsfe06.phx...
>>>
"Willy Denoyette [MVP]" <wi*************@telenet.bewrote in message
news:eO**************@TK2MSFTNGP04.phx.gbl...
"Peter Olcott" <No****@SeeScreen.comwrote in message
news:RH********************@newsfe07.phx...
>
"Willy Denoyette [MVP]" <wi*************@telenet.bewrote in message
news:%2****************@TK2MSFTNGP06.phx.gbl.. .
>"Peter Olcott" <No****@SeeScreen.comwrote in message
>news:HP********************@newsfe07.phx...
>>>
>>"Willy Denoyette [MVP]" <wi*************@telenet.bewrote in message
>>news:%2****************@TK2MSFTNGP06.phx.gbl ...
>>>"Peter Olcott" <No****@SeeScreen.comwrote in message
>>>news:RK*******************@newsfe12.phx.. .
>>>>It looks like System::Collections::Generic.List throws and OUT_OF_MEMORY exception
>>>>whenever memory allocated exceeds 256 MB. I have 1024 MB on my system so I am not
>>>>even out of physical RAM, much less virtual memory.
>>>>>
>>>>Are other people experiencing this same problem?
>>>>>
>>>>>
>>>>>
>>>>
>>>Why do you mention VS2005? This is thrown at program run-time, right? The run-time
>>>limitation is 2GB for a CLR object type, that means that your List can only hold at
>>>most 2GB, however, due heap fragmentation of a 32 bit process, the maximum size of
>>>a single object is limited to something from 1.2GB up to 1.8GB depending on the
>>>type of OS and application. A simple console application should be able to allocate
>>>a single List of ~1.6GB or more , Windows Forms will top at ~1.2GB for the largest
>>>List (or array or whatever).
>>>>
>>>Willy.
>>>
>>If you have actually tested this and thus know that it works empirically rather than
>>theoretically, then this must be a limitation of Visual Studio 2005 Express. I could
>>not get Visual Studio 2005 Express to allocate more than 256 MB without abnormally
>>terminating.
>>>
>>I was very pleased with its relative performance to Native Code Visual C++ 6.0. It
>>was something like 50% faster on every test.
>>
>No, once again this is run-time related VS does (and can't impose such restrictions)
>, try to compile and run your code from the command-line prompt if you do't trust me.
>Keep in mind that when you don't pre-allocate the List, you will end with a List
>doubling it's size each time it get's filled to the max. capacity, so the OOM
>exception might get thrown when no free *contiguous* block of ~512MB can be found in
>excess of the already allocated ~256MB.
>>
>Willy.
>>
>>
>>
>>
>>
>Willy.
>>
>>
>
It works fine for native code std::vector, yet does not work for either managed
std::vector or List<Byte. The command line version failed to compile.
>
There is no such thing like MANAGED std::vector,

Yes, there is just recently. This saves old C++ programmers like me alot of learning
curve switching to .NET.

No, there is not, the managed "std" like templates are currently in a "closed" pre-beta
stage and are scheduled for public release after the ORCAS release (another wait for at
least one year), and will not belong to the std namespace, that is they will not be named
std::vector.

Willy.

http://blogs.msdn.com/vcblog/archive...30/777835.aspx
It looks like you were right, yet I then wonder why the std::vector compiled? Did it mix
native code and managed code together in the same function? (I had a std::vector, and a
Generic.List in the same function).


Yes, this is because you compiled with the /clr option, which means mixed mode! The
std::vector template will compile as a native class. Try to compile in pure managed mode
(/clr:safe) and it will fail.

Willy.

Dec 22 '06 #27

P: n/a
"Peter Olcott" <No****@SeeScreen.comwrote in message
news:K9*********************@newsfe06.phx...
>Not respective of platform, if you're really allocating memory in chunks
this big, you need to rethink your algorithm.

Would you think that 18,000 hours would be enough thought? That's how much
I have into it.
Well, if the best solution you can come up with requires allocating 1GB
chunks, and you're working on a computer that has only 1GB of physical
memory, I would say you're in some trouble.

At the very least, if you're dealing with memory chunks of that size, you
really need to be in x64 or IA64 land, working on machines with significantly
more memory.

--
Chris Mullins
Dec 22 '06 #28

P: n/a
That's not one gig.

That's 1G entries * 4 bytes per int, for a total of 4 gigs. This is never, ever,
going to run on an x86 system.

--
Chris Mullins

"Peter Olcott" <No****@SeeScreen.comwrote in message
news:Cn********************@newsfe06.phx...
Try this simpler case:
uint SIZE = 0x3FFFFFFF; // 1024 MB

List<uint> Temp = new List<uint>();
for (uint N = 0; N < SIZE; N++)
Temp.Add(N);

Mine now bombs out on just short of half my memory. I am guessing that it
runs out of actual RAM when it doubles the size on the next reallocation.
Unlike the native code compiler, the managed code compiler must have
actual RAM, virtual memory will not work.

"Chris Mullins" <cm******@yahoo.comwrote in message
news:eW**************@TK2MSFTNGP06.phx.gbl...
>Peter Olcott Wrote:

POIt looks like System::Collections::Generic.List throws and
POOUT_OF_MEMORY exception whenever memory allocated exceeds 256 MB. I
POhave 1024 MB on my system so I am not even out of physical RAM, much
POless virtual memory.

I was curious if there was a limt there, so I wrote this:

private void button1_Click(object sender, EventArgs e)
{
int bytesPerArray = 1024 * 32; // 32 KB per byte array. No large object heap
interaction.
long totalBytes = 0;
List<byte[]> bytes = new List<byte[]>();
while (true)
{
byte[] b = new byte[bytesPerArray];
bytes.Add(b);
totalBytes += bytesPerArray;
System.Diagnostics.Debug.WriteLine("Total: " +
totalBytes.ToString());
}
}

I compiled this as an x86 application, and ran it.

In my output window, the last things in there was:
Total: 1704427520
A first chance exception of type 'System.OutOfMemoryException'
occurred in WindowsApplication2.exe

This is exactly what I expected to see, as I know that each 32-bit
Windows process gets 4GB of virtual memory space, and that 4GB is split in
half, giving 2GB to user code to play with. The managed heap can
typically (if it's not fragmented) get up to 1.5-1.7 gigabytes before
issues arise.

If I compile this for x64 (or leave it as Any CPU), the limites are much
higher.

My machine does have 4GB of memory on it, but I've seen these exact same
results on machines with far less memory. (I build very big, very
scalable applications all day long... and running into memory
limitations in 32-bit land was a big problem I had for years.)

--
Chris Mullins, MCSD.NET, MCPD:Enterprise
http://www.coversant.net/blogs/cmullins



Dec 22 '06 #29

P: n/a
"Peter Olcott" <No****@SeeScreen.comwrote in message
news:s3****************@newsfe10.phx...
>>Try this simpler case:
uint SIZE = 0x3FFFFFFF; // 1024 MB

List<uint> Temp = new List<uint>();
for (uint N = 0; N < SIZE; N++)
Temp.Add(N);
I forgot to divide by four, make that 0xFFFFFFF; // 256M uints (1GB)
This should work on a system with 2GB of RAM.
That's still not likely to work. Expecting to be able to grow the heap each
time, in chunks that big, is likely to fail.

When the grow happens, and you're at 1.2GB, it's going to try to allocate
2.4GB - obviously it can't get this much memory, and it goes boom.

With your code, I get the OOM at:
? SIZE
268435455
At this point, your algorithm is, I would have to say, deeply flawed. It's
not at all practical to allocate memory in chunks of that size on an x86
system. This isn't even a .Net issue - it's a "your process gets 2GB of
memory. Into that you're loading all your stuff. You get what's left after
.Net has initialized, and your DLLs are loaded and jitted.".

I can't imagine C or C++ can do this a whole lot better. In fact, from what
I remember of how their heaps work, it's just luck if it works there at all.

--
Chris Mullins
Dec 22 '06 #30

P: n/a

I'm running on WinXP64, which (if I remember right) plays some weird games
with WOW for x86 applications.

I seem to remember that it moves a number of things up into "higher" memory
segments to give more space to user applications. I could be misremembering
things here though, as it's not an area I've paid much attention to.

--
Chris Mullins

"Willy Denoyette [MVP]" <wi*************@telenet.bewrote in message
news:%2****************@TK2MSFTNGP03.phx.gbl...
"Chris Mullins" <cm******@yahoo.comwrote in message
news:eL**************@TK2MSFTNGP04.phx.gbl...
>"Peter Olcott" Wrote:
>>Try and add one gig of Bytes to a List<Byteand see if it doesn't
abnormally terminate.

Well, I did just that (got to 1.7GB) and it ran fine as an x86
application. I was allocating 32KB byte chunks though.

This is a VERY different use case from allocating a single 1GB buffer. I
tried to allocate a single 1GB array:
private void button2_Click(object sender, EventArgs e)
{
List<byte[]> bb = new List<byte[]>();
int GB = 1024 * 1024 * 1024;
byte[] b = new byte[GB];
bb.Add(b);
MessageBox.Show("Allocated 1 GB");
}

... and this immediatly threw an OOM when I compiled it under x86.
It did run perfectly (and instantly) when compiled and run under x64.

Not respective of platform, if you're really allocating memory in chunks
this big, you need to rethink your algorithm.

Even in x64 land, holding onto chunks in the heap this big would be
scary.

--
Chris Mullins, MCSD.NET, MCPD:Enterprise
http://www.coversant.net/blogs/cmullins



Windows Forms needs some native DLLs that are loaded by the OS loader at
an address which further fragments the process heap, and because the GC
heap allocates from the process heap, it cannot allocate a contiguous area
larger than the largest process heap fragment. The net result is that even the
simplest Windows programs (native and managed) cannot allocate more
than ~1.2 GB, or less, depending on OS version, SP and OS language
version.

Willy.

Dec 22 '06 #31

P: n/a
"Peter Olcott" <No****@SeeScreen.comwrote
"Chris Mullins" <cm******@yahoo.comwrote in message
>>
Not respective of platform, if you're really allocating memory in chunks
this big, you need to rethink your algorithm.

Would you think that 18,000 hours would be enough thought? That's how much
I have into it.
18000 hour / 8 hours per day = 2250 days. (assuming 8 hours per day - if
you're spending more than that, it's probably detrimental).

There are about 260 working days per year (5 days per week * 52 weeks per
year).

2250 days / 260 days per year == 8.66 years.

That means you've spend 8 hours per day, 5 days per week, for 8.66 years
working on this algorithm. You're either unbelievably driven, or insane. At
this point, I could believe either. <Grin>.

I really, really, still have to think that any algorithm that requires a
contiguous chunk of memory of size 1GB is flawed. I obviously don't have
enough information to make a more informed decision, but it's a HUGE red
flag. It's on par with "My application has 6125 threads running", or "string
sql = "Select * from master'"; ".

--
Chris Mullins, MCSD.NET, MCPD:Enterprise
http://www.coversant.net/blogs/cmullins
Dec 22 '06 #32

P: n/a
"Chris Mullins" <cm******@yahoo.comwrote in message
news:Ob**************@TK2MSFTNGP06.phx.gbl...
>
I'm running on WinXP64, which (if I remember right) plays some weird games with WOW for
x86 applications.

I seem to remember that it moves a number of things up into "higher" memory segments to
give more space to user applications. I could be misremembering things here though, as
it's not an area I've paid much attention to.
Not really, under WOW64 the modules are still loaded as under XP 32 bit (provided no
relocation is needed), that is, they load just below the 2GB address boundary; the only
difference is that under WOW64 you can effectively access the full 4GB address space for
the program (provided it's LARGEADDRESSAWARE), nothing to share with the OS :-).

Willy.

Dec 22 '06 #33

P: n/a

"Willy Denoyette [MVP]" <wi*************@telenet.bewrote in message
news:uz**************@TK2MSFTNGP04.phx.gbl...
"Peter Olcott" <No****@SeeScreen.comwrote in message
news:s3****************@newsfe10.phx...
>>
"Willy Denoyette [MVP]" <wi*************@telenet.bewrote in message
news:OY**************@TK2MSFTNGP06.phx.gbl...
>>"Peter Olcott" <No****@SeeScreen.comwrote in message
news:Cn********************@newsfe06.phx...
Try this simpler case:
uint SIZE = 0x3FFFFFFF; // 1024 MB

List<uint> Temp = new List<uint>();
for (uint N = 0; N < SIZE; N++)
Temp.Add(N);

Mine now bombs out on just short of half my memory. I am guessing that it
runs out of actual RAM when it doubles the size on the next reallocation.
Unlike the native code compiler, the managed code compiler must have actual
RAM, virtual memory will not work.

3FFFFFFF is 1GB!!!! More, your List hold uint's that is 4 byes per entry,
that means that in your sample you are trying to allocate 4GB!. It's obvious
that this will fail.

I forgot to divide by four, make that 0xFFFFFFF; // One MB of uint
This should work on a system with 2GB of RAM.

And it works for me!
>>>
And as I said in another reply, you have to pre-allocate the List, otherwise
you'll need much more free CONTIGUOUS memory than 1GB.

It should not be much more it should be exactly twice as much. Native code
can handle this using virtual memory. Is it that .NET is not as good at using
virtual memory? By using virtual memory the reallocation doubling swaps out
to disk, thus is able to use all of actual RAM.
No it can't, not for managed code nor for native code, because the 2GB space
per process must be shared between code and data, the code are the executable
module(s) and all it's dependent modules (DLL's) which are loaded when the
process starts. The result is that you don't have a contiguous area of 2GB for
the heap to allocate from.
More, as I said before, the DLL's might get loaded at some addressees which
fragments the heap in such a way that the largest FREE area of CONTIGUOUS
memory is much smaller than 2GB, possibly smaller than 1GB. Memory allocation
patterns may even further fragment the heap in such a way that even trying to
allocate a 1MB buffer will throw an OOM. And this all has nothing to do with
.NET, there is no such thing like a .NET process, at run-time there is no such
thing like a native or .NET process, only difference between .NET and native
is that the memory footprint is somewhat larger at process start-up because of
the .NET run-time and it's libraries, but this is less than 10MB and is just
overhead taken once.

Here are the final results:
Visual C++ 6.0 native code allocated a std::vector 50% larger than the largest
Generic.List that the .NET runtime could handle, and took about 11-fold (1100%)
longer to do this. This would tend to indicate extensive use of virtual memory,
especially when this next benchmark is considered.

Generic.List was only 65% faster than native code std::vector when the amount of
memory allocated was about 1/2 of total system memory. So it looks like the .NET
run-time achieves better performance at the expense of not using the virtual
memory system.
>
>>A list that isn't pre-allocated, starts with a 32 byte array as back-end
store, this array is extended each time it overflows. Extending means
reserving another blob from the heap with a new size = (original size * 2.
The finale result is that a 512MB List will extend to 1GB when it expands,
but that means also that a new blob of 1GB must be found before the old
block can be copied to the new blob (List).

This should probably work...
List<byte> Temp = new List<byte>(1024*1024*1024);

I am specifically testing the difficult (and common) case where one does not
know how much memory is needed in advance. The performance of this aspect of
.NET determines whether or not .NET is currently feasible for my application.

But you know at least whether or not you'll need 100Kb, 1MB or 500MB or maybe
1GB, isn't it? If you think you'll need 500MB, just start pre-allocating 550
or 600 whatever and you are done, until this get's filled completely and
throws an OOM. Note that this is true for native std::vector as well, both
vector and List are "self expanding" by "copying", that means that you should
always be prepared to get OOM's when you are allocating such huge objects in a
32 bit process.

Willy.


Dec 22 '06 #34

P: n/a

"Chris Mullins" <cm******@yahoo.comwrote in message
news:OP**************@TK2MSFTNGP04.phx.gbl...
"Peter Olcott" <No****@SeeScreen.comwrote in message
news:K9*********************@newsfe06.phx...
>>Not respective of platform, if you're really allocating memory in chunks
this big, you need to rethink your algorithm.

Would you think that 18,000 hours would be enough thought? That's how much I
have into it.

Well, if the best solution you can come up with requires allocating 1GB
chunks, and you're working on a computer that has only 1GB of physical memory,
I would say you're in some trouble.

At the very least, if you're dealing with memory chunks of that size, you
really need to be in x64 or IA64 land, working on machines with significantly
more memory.
It is only in very rare cases that I ever need nearly 1 GB or more. At least
90% of the time 200 MB should be plenty. When I first designed this system it had
an 8 MB ceiling. Now that RAM is much cheaper, I have added much greater
capabilities.
>
--
Chris Mullins

Dec 22 '06 #35

P: n/a

"Chris Mullins" <cm******@yahoo.comwrote in message
news:uM**************@TK2MSFTNGP03.phx.gbl...
"Peter Olcott" <No****@SeeScreen.comwrote
>"Chris Mullins" <cm******@yahoo.comwrote in message
>>>
Not respective of platform, if you're really allocating memory in chunks
this big, you need to rethink your algorithm.

Would you think that 18,000 hours would be enough thought? That's how much I
have into it.

18000 hour / 8 hours per day = 2250 days. (assuming 8 hours per day - if
you're spending more than that, it's probably detrimental).

There are about 260 working days per year (5 days per week * 52 weeks per
year).

2250 days / 260 days per year == 8.66 years.

That means you've spend 8 hours per day, 5 days per week, for 8.66 years
working on this algorithm. You're either unbelievably driven, or insane. At
this point, I could believe either. <Grin>.
Here is what I have spent 18,000 hours on since 1999:
www.SeeScreen.com
This technology can save businesses as much as billions of dollars every year in
reduced computer user labor costs. I spent about 1,000 hours (3.5 months of 12
hour days) trying to design around claim 16 of my patent and discovered that
there are no good alternatives to this technology.
I really, really, still have to think that any algorithm that requires a
contiguous chunk of memory of size 1GB is flawed. I obviously don't have enough
information to make a more informed decision, but it's a HUGE red flag. It's
on par with "My application has 6125 threads running", or "string sql =
"Select * from master'"; ".

--
Chris Mullins, MCSD.NET, MCPD:Enterprise
http://www.coversant.net/blogs/cmullins


Dec 22 '06 #36

P: n/a
"Peter Olcott" <No****@SeeScreen.comwrote in message
news:QA****************@newsfe14.phx...
>
Generic.List was only 65% faster than native code std::vector when the
amount of memory allocated was about 1/2 of total system memory. So it
looks like the .NET run-time achieves better performance at the expense of
not using the virtual memory system.
You really need to understand how this stuff works. .Net does use virtual
memory, just like every Win32 application. Using / Not Using Virtual memory
isn't the issue here.

I know I've said it, and Willy has said it, but you're running into
fragmentation issues.

Regardless, the most memory you're EVER going to be able to address on an
x86 system is 2Gigs. Of this 2 gigs, that includes all your code, libraries
you load, whatever initialization your runtime does, etc.

Trying to allocate 1GB of this as a single fragment is NEVER going to work
in a reliable way. It's not a .Net thing, it's an x86 thing.

--
Chris Mullins
Dec 22 '06 #37

P: n/a

"Chris Mullins" <cm******@yahoo.com> wrote in message
news:ey*************@TK2MSFTNGP04.phx.gbl...
"Peter Olcott" <No****@SeeScreen.com> wrote in message
news:QA****************@newsfe14.phx...
>>
Generic.List was only 65% faster than native code std::vector when the amount
of memory allocated was about 1/2 of total system memory. So it looks like
the .NET run-time achieves better performance at the expense of not using the
virtual memory system.

You really need to understand how this stuff works. .Net does use virtual
memory, just like every Win32 application. Using / Not Using Virtual memory
isn't the issue here.

I know I've said it, and Willy has said it, but you're running into
fragmentation issues.
There is no fragmentation, all of the memory allocation in both programs is in
an identical tight loop. Since the native code can allocate 50% more memory than
the .NET code, and the .NET code is essentially identical to the native code,
the difference must be in the .NET run-time.
>
Regardless, the most memory you're EVER going to be able to address on an x86
system is 2Gigs. Of this 2 gigs, that includes all your code, libraries you
load, whatever initialization your runtime does, etc.
Native code can't address the whole 4GB address space? In any case 2 GB should
be plenty until the 64-bit memory architecture becomes the norm.
>
Trying to allocate 1GB of this as a single fragment is NEVER going to work in
a reliable way. It's not a .Net thing, it's an x86 thing.

--
Chris Mullins

Dec 23 '06 #38

P: n/a
"Peter Olcott" <No****@SeeScreen.com> wrote
Here is what I have spent 18,000 hours on since 1999:
www.SeeScreen.com
I beat ya to it. :)

I looked through that earlier, when I was trying to figure out what on earth
you were needing to allocate 1GB of memory for.

As an aside, from one business owner to another, you really need to focus on
the message there. I went through quite a bit of the site, and wasn't clear
on how it could save me money. I own a computer software company (and act
[most of the time] as Chief Architect) , and we do LOTS of testing for our
software. In that sense, I'm pretty close to the ideal customer. I realize
it helps with testing, and allows testing to be easier, but in terms of what
points of pain it is addressing, I really don't know.

I still don't see the answer to "I need 1 GB in a single array.". There's
gotta be a better algorithm you can use - or on problems that really need
it, force people to use x64.

--
Chris Mullins, MCSD.NET, MCPD:Enterprise
http://www.coversant.net/blogs/cmullins
Dec 23 '06 #39

P: n/a

"Chris Mullins" <cm******@yahoo.com> wrote in message
news:%2****************@TK2MSFTNGP04.phx.gbl...
"Peter Olcott" <No****@SeeScreen.com> wrote
>Here is what I have spent 18,000 hours on since 1999:
www.SeeScreen.com

I beat ya to it. :)

I looked through that earlier, when I was trying to figure out what on earth
you were needing to allocate 1GB of memory for.

As an aside, from one business owner to another, you really need to focus on
the message there. I went through quite a bit of the site, and wasn't clear on
how it could save me money. I own a computer software company (and act [most
of the time] as Chief Architect) , and we do LOTS of testing for our software.
In that sense, I'm pretty close to the ideal customer. I realize it helps with
testing, and allows testing to be easier, but in terms of what points of pain
it is addressing, I really don't know.
Although I have a BSBA (business) degree I have spent my whole professional
career developing software. It took me many hundreds of hours to get the words
as clear and convincing as they are now. It will probably take an expert writer
many more hundreds of hours to get the words clear and convincing enough.

My current plan is to offer a combined GUI Scripting Language Mouse/Keyboard
macro recorder that can be used to place custom optimized user interfaces on top
of existing systems.
In addition to the product itself, custom development using it will be
provided as a service.

If it only saves 1/3 of the 75 million U.S. business computer users 5 minutes a
day, and they only make $10.00 an hour, this is worth billions of dollars per
year. Preliminary market studies indicate that saving 1/3 of all business
computer users an average of at least five minutes a day is a reasonable
expectation.

My next big hurdle is to find a marketing partner with a little bit of capital.

>
I still don't see the answer to "I need 1 GB in a single array.". There's
gotta be a better algorithm you can use - or on problems that really need it,
force people to use x64.

--
Chris Mullins, MCSD.NET, MCPD:Enterprise
http://www.coversant.net/blogs/cmullins

Dec 23 '06 #40

P: n/a
"Peter Olcott" <No****@SeeScreen.com> wrote in message
news:QA****************@newsfe14.phx...
>
"Willy Denoyette [MVP]" <wi*************@telenet.be> wrote in message
news:uz**************@TK2MSFTNGP04.phx.gbl...
>"Peter Olcott" <No****@SeeScreen.com> wrote in message
news:s3****************@newsfe10.phx...
>>>
"Willy Denoyette [MVP]" <wi*************@telenet.be> wrote in message
news:OY**************@TK2MSFTNGP06.phx.gbl...
"Peter Olcott" <No****@SeeScreen.com> wrote in message
news:Cn********************@newsfe06.phx...
Try this simpler case:
uint SIZE = 0x3FFFFFFF; // 1024 MB
>
List<uint> Temp = new List<uint>();
for (uint N = 0; N < SIZE; N++)
Temp.Add(N);
>
Mine now bombs out on just short of half my memory. I am guessing that it runs out of
actual RAM when it doubles the size on the next reallocation. Unlike the native code
compiler, the managed code compiler must have actual RAM, virtual memory will not
work.

3FFFFFFF is 1GB!!!! More, your List holds uints, that is 4 bytes per entry, which means
that in your sample you are trying to allocate 4GB! It's obvious that this will fail.

I forgot to divide by four, make that 0xFFFFFFF; // 256M uints = 1GB
This should work on a system with 2GB of RAM.

And it works for me!
>>>>
And as I said in another reply, you have to pre-allocate the List, otherwise you'll
need much more free CONTIGUOUS memory than 1GB.

It should not be much more; it should be exactly twice as much. Native code can handle
this using virtual memory. Is it that .NET is not as good at using virtual memory? By
using virtual memory the reallocation doubling swaps out to disk, thus is able to use
all of actual RAM.
No it can't, not for managed code nor for native code, because the 2GB space per process
must be shared between code and data; the code is the executable module(s) and all of its
dependent modules (DLLs), which are loaded when the process starts. The result is that
you don't have a contiguous area of 2GB for the heap to allocate from.
More, as I said before, the DLLs might get loaded at some addresses which fragment the
heap in such a way that the largest FREE area of CONTIGUOUS memory is much smaller than
2GB, possibly smaller than 1GB. Memory allocation patterns may even further fragment the
heap in such a way that even trying to allocate a 1MB buffer will throw an OOM. And this
all has nothing to do with .NET; there is no such thing as a .NET process: at run-time
there is no difference between a native and a .NET process, the only difference being that
the memory footprint is somewhat larger at process start-up because of the .NET run-time
and its libraries, but this is less than 10MB and is overhead taken just once.


Here are the final results:
Visual C++ 6.0 native code allocated a std::vector 50% larger than the largest
Generic.List that the .NET runtime could handle, and took about 11-fold (1100%) longer to
do this. This would tend to indicate extensive use of virtual memory, especially when this
next benchmark is considered.

Generic.List was only 65% faster than native code std::vector when the amount of memory
allocated was about 1/2 of total system memory. So it looks like the .NET run-time
achieves better performance at the expense of not using the virtual memory system.

.NET is not some kind of alien, it's just a thin layer on top of Win32; it uses the same system
services as native code compiled with whatever compiler you can use on Windows. The CLR and
GC are allocating memory from the heap (that is, from virtual memory) through the same calls as
the C runtime library, and do you know why? Because the CLR uses the same C runtime and
there is no other way to allocate memory in Windows.
As we told you before, the process heap is fragmented from the start of the program; the way
it's fragmented is determined by the modules loaded into the process space, so there might
be a difference between different types of applications. Native C++ console applications
don't have to load the CLR runtime and some of the FCL libraries, which means that the heap
is less fragmented than is the case with a C# console program, but a real-world C++ program
also needs to load libraries, and these will fragment the heap just like in the case of
.NET.

As I said (and others too), each time the List (or vector) overflows it must be extended;
please refer to my previous post to know exactly what this is all about. To prevent this you
have to pre-allocate the List or vector.

Running following code won't throw an OOM when run on 32 bit windows XP.

static void Main()
{
List<byte> bList = new List<byte>(1600000000); // 1.600.000.000 bytes, pre-allocated
for(int i = 0; i < bList.Capacity; i++)
bList.Add(12);
}

while this will throw...

bList = new List<byte>();
for(int i = 0; i < 600000000 ; i++) // 600.000.000 bytes
bList.Add(12);

but 512.000.000 bytes will work...
Now back to C++, this will throw.

#include <vector>
#include <iostream>
#include <new>

int main()
{
std::vector<unsigned char>* bList = new std::vector<unsigned char>;
try {
for(int i = 0; i < 700000000; i++) // 700.000.000 bytes
bList->push_back(12);
}
catch( const std::bad_alloc & ) {
std::cout << "Exception raised: out of memory" << '\n';
}
delete bList;
}

while 640.000.000 bytes may work.

But what's the difference, 100MB? The point is that you can't allocate the full 2GB, and you
need to pre-allocate whenever you are allocating such huge objects (512MB) on 32-bit systems;
native code or managed code, it doesn't matter.
And don't let me get started about the performance implications of not doing so!!!!!

Willy.

Dec 23 '06 #41

P: n/a
"Peter Olcott" <No****@SeeScreen.com> wrote
>
There is no fragmentation, all of the memory allocation in both programs
is in an identical tight loop. Since the native code can allocate 50% more
memory than the .NET code, and the .NET code is essentially identical to
the native code, the difference must be in the .NET run-time.
That doesn't prove anything, as Willy has pointed out again and again.
>Regardless, the most memory you're EVER going to be able to address on an
x86 system is 2Gigs. Of this 2 gigs, that includes all your code,
libraries you load, whatever initialization your runtime does, etc.

Native code can't address the whole 4GB address space? In any case 2 GB
should be plenty until the 64-bit memory architecture becomes the norm.
No, it can't.

It gets the same 4GB address space as every other win32 application, and
it's limited to the lower half of that, just like every other Win32
application. This means, at most, you have 2GB to play with. For all Win32
apps.
>Trying to allocate 1GB of this as a single fragment is NEVER going to
work in a reliable way. It's not a .Net thing, it's an x86 thing.
I agree with whoever said that. Oh, wait. That was me! :)

Even if it works on your computer today in unmanaged C++, there will be
computers on which it won't work, and times on your computer on which it
won't work. You're really just being stubborn at this point.

Enough people (and some, like Willy and Jon, who are damn smart), have
expressed this that it's time to open your eyes and agree they just might
know what they're talking about.

--
Chris Mullins
Dec 23 '06 #42

P: n/a
"Peter Olcott" <No****@SeeScreen.com> wrote
My current plan is to offer a combined GUI Scripting Language
Mouse/Keyboard macro recorder that can be used to place custom optimized
user interfaces on top of existing systems.
In addition to the product itself, custom development using it will be
provided as a service.
I'm a software engineer. A darn good one. I know what GUI's are. I know a
dozen scripting languages. I've written mouse and keyboard drivers in
assembly. I've written countless macros. I spend tons of time optimizing
things.

I don't understand what it is you just said. I'm not just being obstinate
and stubborn either.

It's time to sit down with an A tier marketing guy and get your message
straight.

[Snip]
.... at this point, I wish you the best of luck. I'm done responding to you,
as you're not really willing to listen to what I or anyone else has said to
you and I'm just wasting my time.

--
Chris Mullins, MCSD.NET, MCPD:Enterprise
http://www.coversant.net/blogs/cmullins
Dec 23 '06 #43

P: n/a

"Willy Denoyette [MVP]" <wi*************@telenet.be> wrote in message
news:uq**************@TK2MSFTNGP04.phx.gbl...
"Peter Olcott" <No****@SeeScreen.com> wrote in message
news:QA****************@newsfe14.phx...
>>
"Willy Denoyette [MVP]" <wi*************@telenet.be> wrote in message
news:uz**************@TK2MSFTNGP04.phx.gbl...
>>"Peter Olcott" <No****@SeeScreen.com> wrote in message
news:s3****************@newsfe10.phx...

"Willy Denoyette [MVP]" <wi*************@telenet.be> wrote in message
news:OY**************@TK2MSFTNGP06.phx.gbl...
"Peter Olcott" <No****@SeeScreen.com> wrote in message
news:Cn********************@newsfe06.phx...
>Try this simpler case:
>uint SIZE = 0x3FFFFFFF; // 1024 MB
>>
>List<uint> Temp = new List<uint>();
>for (uint N = 0; N < SIZE; N++)
> Temp.Add(N);
>>
>Mine now bombs out on just short of half my memory. I am guessing that it
>runs out of actual RAM when it doubles the size on the next reallocation.
>Unlike the native code compiler, the managed code compiler must have
>actual RAM, virtual memory will not work.
>
3FFFFFFF is 1GB!!!! More, your List holds uints, that is 4 bytes per entry,
which means that in your sample you are trying to allocate 4GB! It's
obvious that this will fail.

I forgot to divide by four, make that 0xFFFFFFF; // One MB of uint
This should work on a system with 2GB of RAM.
And it works for me!

>
And as I said in another reply, you have to pre-allocate the List,
otherwise you'll need much more free CONTIGUOUS memory than 1GB.

It should not be much more; it should be exactly twice as much. Native code
can handle this using virtual memory. Is it that .NET is not as good at
using virtual memory? By using virtual memory the reallocation doubling
swaps out to disk, thus is able to use all of actual RAM.

No it can't, not for managed code nor for native code, because the 2GB space
per process must be shared between code and data; the code is the
executable module(s) and all of its dependent modules (DLLs), which are loaded
when the process starts. The result is that you don't have a contiguous area
of 2GB for the heap to allocate from.
More, as I said before, the DLLs might get loaded at some addresses which
fragment the heap in such a way that the largest FREE area of CONTIGUOUS
memory is much smaller than 2GB, possibly smaller than 1GB. Memory
allocation patterns may even further fragment the heap in such a way that
even trying to allocate a 1MB buffer will throw an OOM. And this all has
nothing to do with .NET; there is no such thing as a .NET process: at
run-time there is no difference between a native and a .NET process, the
only difference being that the memory footprint is somewhat larger at
process start-up because of the .NET run-time and its libraries, but this
is less than 10MB and is overhead taken just once.


Here are the final results:
Visual C++ 6.0 native code allocated a std::vector 50% larger than the
largest Generic.List that the .NET runtime could handle, and took about
11-fold (1100%) longer to do this. This would tend to indicate extensive use
of virtual memory, especially when this next benchmark is considered.

Generic.List was only 65% faster than native code std::vector when the amount
of memory allocated was about 1/2 of total system memory. So it looks like
the .NET run-time achieves better performance at the expense of not using the
virtual memory system.


.NET is not some kind of alien, it's just a thin layer on top of Win32; it uses
the same system services as native code compiled with whatever compiler you
can use on Windows. The CLR and GC are allocating memory from the heap (that is,
from virtual memory) through the same calls as the C runtime library, and do
you know why? Because the CLR uses the same C runtime and there is no other
way to allocate memory in Windows.
Then what explains why two otherwise identical programs, both run on
the same computer and OS (XP Pro), differ such that one can allocate 50% more
memory than the other??? You claim that the systems are the same, yet empirical
fact shows that they have different results. One cannot get different results
from identical systems. The only essential difference is that the former is
native code and the latter is .NET.

By the way, some experts have told me that some aspects of the .NET
architecture are not merely wrappers around the pre-existing architecture but
completely re-written subsystems.
As we told you before, the process heap is fragmented from the start of the
program; the way it's fragmented is determined by the modules loaded into the
process space, so there might be a difference between different types of
applications. Native C++ console applications don't have to load the CLR
runtime and some of the FCL libraries, which means that the heap is less
fragmented than is the case with a C# console program, but a real-world C++
program also needs to load libraries, and these will fragment the heap just
like in the case of .NET.

As I said (and others too), each time the List (or vector) overflows it must be
extended; please refer to my previous post to know exactly what this is all
about. To prevent this you have to pre-allocate the List or vector.

Running following code won't throw an OOM when run on 32 bit windows XP.

static void Main()
{
List<byte> bList = new List<byte>(1600000000); // 1.600.000.000 bytes, pre-allocated
for(int i = 0; i < bList.Capacity; i++)
bList.Add(12);
}

while this will throw...

bList = new List<byte>();
for(int i = 0; i < 600000000 ; i++) // 600.000.000 bytes
bList.Add(12);

but 512.000.000 bytes will work...
Now back to C++, this will throw.

#include <vector>
#include <iostream>
#include <new>

int main()
{
std::vector<unsigned char>* bList = new std::vector<unsigned char>;
try {
for(int i = 0; i < 700000000; i++) // 700.000.000 bytes
bList->push_back(12);
}
catch( const std::bad_alloc & ) {
std::cout << "Exception raised: out of memory" << '\n';
}
delete bList;
}

while 640.000.000 bytes may work.

But what's the difference, 100MB? The point is that you can't allocate the full
2GB, and you need to pre-allocate whenever you are allocating such huge objects
(512MB) on 32-bit systems; native code or managed code, it doesn't matter.
And don't let me get started about the performance implications of not doing
so!!!!!

Willy.

Dec 23 '06 #44

P: n/a

"Chris Mullins" <cm******@yahoo.com> wrote in message
news:ug**************@TK2MSFTNGP03.phx.gbl...
"Peter Olcott" <No****@SeeScreen.com> wrote
>>
There is no fragmentation, all of the memory allocation in both programs is
in an identical tight loop. Since the native code can allocate 50% more
memory than the .NET code, and the .NET code is essentially identical to the
native code, the difference must be in the .NET run-time.

That doesn't prove anything, as Willy has pointed out again and again.
>>Regardless, the most memory you're EVER going to be able to address on an
x86 system is 2Gigs. Of this 2 gigs, that includes all your code, libraries
you load, whatever initialization your runtime does, etc.

Native code can't address the whole 4GB address space? In any case 2 GB
should be plenty until the 64-bit memory architecture becomes the norm.

No, it can't.

It gets the same 4GB address space as every other win32 application, and it's
limited to the lower half of that, just like every other Win32 application.
This means, at most, you have 2GB to play with. For all Win32 apps.
>>Trying to allocate 1GB of this as a single fragment is NEVER going to work
in a reliable way. It's not a .Net thing, it's an x86 thing.

I agree with whoever said that. Oh, wait. That was me! :)

Even if it works on your computer today in unmanaged C++, there will be
computers on which it won't work, and times on your computer on which it won't
work. You're really just being stubborn at this point.

Enough people (and some, like Willy and Jon, who are damn smart), have
expressed this that it's time to open your eyes and agree they just might
know what they're talking about.
And some of them are attempting to put forth the point that identical systems can
have consistently different results. I never accept an analytical impossibility,
no matter who the source is.
>
--
Chris Mullins

Dec 23 '06 #45

P: n/a

"Chris Mullins" <cm******@yahoo.com> wrote in message
news:%2****************@TK2MSFTNGP03.phx.gbl...
"Peter Olcott" <No****@SeeScreen.com> wrote
>My current plan is to offer a combined GUI Scripting Language Mouse/Keyboard
macro recorder that can be used to place custom optimized user interfaces on
top of existing systems.
In addition to the product itself, custom development using it will be
provided as a service.

I'm a software engineer. A darn good one. I know what GUI's are. I know a
dozen scripting languages. I've written mouse and keyboard drivers in
assembly. I've written countless macros. I spend tons of time optimizing
things.

I don't understand what it is you just said. I'm not just being obstinate and
stubborn either.
No other technology can possibly produce a mouse macro recorder or GUI scripting
language that can always see where it needs to click the mouse.

If you know what GUI scripting languages are
http://en.wikipedia.org/wiki/Scripti...#GUI_Scripting
(an entirely different thing from scripting languages in general),

and you know what a mouse recorder is,
http://www.google.com/search?hl=en&l...mouse+recorder
you should be able to get my point.

You will not be able to get my point until you know these exact terms as they
are precisely defined.
It's time to sit down with an A tier marketing guy and get your message
straight.

[Snip]
... at this point, I wish you the best of luck. I'm done responding to you, as
you're not really willing to listen to what I or anyone else has said to you
and I'm just wasting my time.

--
Chris Mullins, MCSD.NET, MCPD:Enterprise
http://www.coversant.net/blogs/cmullins


Dec 23 '06 #46

P: n/a

"Chris Mullins" <cm******@yahoo.com> wrote in message
news:%2****************@TK2MSFTNGP04.phx.gbl...
"Peter Olcott" <No****@SeeScreen.com> wrote
>Here is what I have spent 18,000 hours on since 1999:
www.SeeScreen.com

I beat ya to it. :)

I looked through that earlier, when I was trying to figure out what on earth
you were needing to allocate 1GB of memory for.

As an aside, from one business owner to another, you really need to focus on
the message there. I went through quite a bit of the site, and wasn't clear on
how it could save me money. I own a computer software company (and act [most
of the time] as Chief Architect) , and we do LOTS of testing for our software.
In that sense, I'm pretty close to the ideal customer. I realize it helps with
testing, and allows testing to be easier, but in terms of what points of pain
it is addressing, I really don't know.
As far as automated GUI testing goes, the process for this is very well defined,
especially for regression testing. There is much published material on automated
testing.
http://en.wikipedia.org/wiki/Test_automation
>
I still don't see the answer to "I need 1 GB in a single array.". There's
gotta be a better algorithm you can use - or on problems that really need it,
force people to use x64.

--
Chris Mullins, MCSD.NET, MCPD:Enterprise
http://www.coversant.net/blogs/cmullins

Dec 23 '06 #47

P: n/a

"Chris Mullins" <cm******@yahoo.com> wrote in message
news:%2****************@TK2MSFTNGP03.phx.gbl...
"Peter Olcott" <No****@SeeScreen.com> wrote
>My current plan is to offer a combined GUI Scripting Language Mouse/Keyboard
macro recorder that can be used to place custom optimized user interfaces on
top of existing systems.
In addition to the product itself, custom development using it will be
provided as a service.

I'm a software engineer. A darn good one. I know what GUI's are. I know a
dozen scripting languages. I've written mouse and keyboard drivers in
assembly. I've written countless macros. I spend tons of time optimizing
things.

I don't understand what it is you just said. I'm not just being obstinate and
stubborn either.
The problem was my fault; I did not answer your prior post properly. You
specifically asked about testing, and my answer did not address this point,
sorry.
>
It's time to sit down with an A tier marketing guy and get your message
straight.

[Snip]
... at this point, I wish you the best of luck. I'm done responding to you, as
you're not really willing to listen to what I or anyone else has said to you
and I'm just wasting my time.

--
Chris Mullins, MCSD.NET, MCPD:Enterprise
http://www.coversant.net/blogs/cmullins


Dec 23 '06 #48

P: n/a

"Willy Denoyette [MVP]" <wi*************@telenet.be> wrote in message
news:uq**************@TK2MSFTNGP04.phx.gbl...
"Peter Olcott" <No****@SeeScreen.com> wrote in message
news:QA****************@newsfe14.phx...
>>
"Willy Denoyette [MVP]" <wi*************@telenet.be> wrote in message
news:uz**************@TK2MSFTNGP04.phx.gbl...
>>"Peter Olcott" <No****@SeeScreen.com> wrote in message
news:s3****************@newsfe10.phx...

"Willy Denoyette [MVP]" <wi*************@telenet.be> wrote in message
news:OY**************@TK2MSFTNGP06.phx.gbl...
"Peter Olcott" <No****@SeeScreen.com> wrote in message
news:Cn********************@newsfe06.phx...
>Try this simpler case:
>uint SIZE = 0x3FFFFFFF; // 1024 MB
>>
>List<uint> Temp = new List<uint>();
>for (uint N = 0; N < SIZE; N++)
> Temp.Add(N);
>>
>Mine now bombs out on just short of half my memory. I am guessing that it
>runs out of actual RAM when it doubles the size on the next reallocation.
>Unlike the native code compiler, the managed code compiler must have
>actual RAM, virtual memory will not work.
>
3FFFFFFF is 1GB!!!! More, your List holds uints, that is 4 bytes per entry,
which means that in your sample you are trying to allocate 4GB! It's
obvious that this will fail.

I forgot to divide by four, make that 0xFFFFFFF; // One MB of uint
This should work on a system with 2GB of RAM.
And it works for me!

>
And as I said in another reply, you have to pre-allocate the List,
otherwise you'll need much more free CONTIGUOUS memory than 1GB.

It should not be much more; it should be exactly twice as much. Native code
can handle this using virtual memory. Is it that .NET is not as good at
using virtual memory? By using virtual memory the reallocation doubling
swaps out to disk, thus is able to use all of actual RAM.

No it can't, not for managed code nor for native code, because the 2GB space
per process must be shared between code and data; the code is the
executable module(s) and all of its dependent modules (DLLs), which are loaded
when the process starts. The result is that you don't have a contiguous area
of 2GB for the heap to allocate from.
More, as I said before, the DLLs might get loaded at some addresses which
fragment the heap in such a way that the largest FREE area of CONTIGUOUS
memory is much smaller than 2GB, possibly smaller than 1GB. Memory
allocation patterns may even further fragment the heap in such a way that
even trying to allocate a 1MB buffer will throw an OOM. And this all has
nothing to do with .NET; there is no such thing as a .NET process: at
run-time there is no difference between a native and a .NET process, the
only difference being that the memory footprint is somewhat larger at
process start-up because of the .NET run-time and its libraries, but this
is less than 10MB and is overhead taken just once.


Here are the final results:
Visual C++ 6.0 native code allocated a std::vector 50% larger than the
largest Generic.List that the .NET runtime could handle, and took about
11-fold (1100%) longer to do this. This would tend to indicate extensive use
of virtual memory, especially when this next benchmark is considered.

Generic.List was only 65% faster than native code std::vector when the amount
of memory allocated was about 1/2 of total system memory. So it looks like
the .NET run-time achieves better performance at the expense of not using the
virtual memory system.


.NET is not some kind of alien, it's just a thin layer on top of Win32; it uses
the same system services as native code compiled with whatever compiler you
can use on Windows. The CLR and GC are allocating memory from the heap (that is,
from virtual memory) through the same calls as
I think that I just found out the reason for the difference. Visual C++ 6.0
std::vector has a memory growth factor of 1.5, whereas Generic.List has been
reported to have a memory growth factor of 2.0. The next reallocation of
std::vector will fit into contiguous free RAM because it is only 50% larger.

The next allocation of Generic.List will not, because it is twice as big. Both of
the prior allocations fit into actual RAM without the need of virtual memory.
Although the next allocation of std::vector will fit into free RAM, it must
write the current data to virtual memory to make room.

What it boils down to is that the single difference of the memory growth factor can
entirely account for all of the differences in the results.
the C runtime library, and do you know why? Because the CLR uses the same C
runtime and there is no other way to allocate memory in Windows.
As we told you before, the process heap is fragmented from the start of the
program; the way it's fragmented is determined by the modules loaded into the
process space, so there might be a difference between different types of
applications. Native C++ console applications don't have to load the CLR
runtime and some of the FCL libraries, which means that the heap is less
fragmented than is the case with a C# console program, but a real-world C++
program also needs to load libraries, and these will fragment the heap just
like in the case of .NET.

As I said (and others too), each time the List (or vector) overflows it must be
extended; please refer to my previous post to know exactly what this is all
about. To prevent this you have to pre-allocate the List or vector.

Running following code won't throw an OOM when run on 32 bit windows XP.

static void Main()
{
List<byte> bList = new List<byte>(1600000000); // 1.600.000.000 bytes, pre-allocated
for(int i = 0; i < bList.Capacity; i++)
bList.Add(12);
}

while this will throw...

bList = new List<byte>();
for(int i = 0; i < 600000000 ; i++) // 600.000.000 bytes
bList.Add(12);

but 512.000.000 bytes will work...
Now back to C++, this will throw.

#include <vector>
#include <iostream>
#include <new>

int main()
{
std::vector<unsigned char>* bList = new std::vector<unsigned char>;
try {
for(int i = 0; i < 700000000; i++) // 700.000.000 bytes
bList->push_back(12);
}
catch( const std::bad_alloc & ) {
std::cout << "Exception raised: out of memory" << '\n';
}
delete bList;
}

while 640.000.000 bytes may work.

But what's the difference, 100MB? The point is that you can't allocate the full
2GB, and you need to pre-allocate whenever you are allocating such huge objects
(512MB) on 32-bit systems; native code or managed code, it doesn't matter.
And don't let me get started about the performance implications of not doing
so!!!!!

Willy.

Dec 23 '06 #49

P: n/a
Trimmed...
"Peter Olcott" <No****@SeeScreen.comwrote in message
news:rh*********************@newsfe07.phx...
I think that I just found out the reason for the difference. Visual C++ 6.0's std::vector
has a memory growth factor of 1.5, whereas Generic.List has been reported to have a memory
growth factor of 2.0. The next reallocation of std::vector will fit into contiguous free
RAM because it is only 50% larger.

The next allocation of Generic.List will not, because it is twice as big. Both of the prior
allocations fit into actual RAM without the need of virtual memory. Although the next
allocation of std::vector will fit into free RAM, it must write the current data to
virtual memory to make room.
You are getting close. The growth factor in C++ is implementation-dependent and is not
defined by the standard, so it varies from implementation to implementation.
The growth factor for generic containers in .NET is not a fixed 2.0 either, but varies
depending on the actual capacity of the container: it starts with factor 2 for small
containers, and once a threshold is reached it drops to 1.5. A growth factor of 2 is
advantageous in terms of performance (less GC pressure) for small containers that grow
quickly, but disadvantageous for large containers in terms of memory consumption.
But there is more: containers larger than 85KB are allocated from the so-called Large
Object Heap (LOH), and this one isn't compacted by the GC after a collection run, simply
because it's too expensive to move these large objects around in memory. That means you can
end up with a fragmented LOH if you don't care about your allocation scheme.
Think about what happens in this scenario:
thread T1 allocates a List<int>(), say L1, and starts filling the list with 1000000 ints;
at the same time, thread T2 allocates a List<double>(), say L2, and starts filling
this list with 100000 doubles.
This will result in a highly fragmented LOH, especially when one of the containers is
long-lived. In that case you may even get OOM exceptions when allocating objects much
smaller than the total free heap space. In such a scenario the only solution is to start
with pre-allocated containers, say 250000 for L1 and 25000 for L2, to reduce the number of
fragments if you don't know the exact "end size"; much better, though, is to allocate the
end size. Anyway, native or managed, you must be prepared to receive OOM exceptions, but
more importantly you should try to prevent OOM when allocating large objects; pre-allocating
is such a technique, and as a bonus it helps with performance.
The native heap is never compacted by the C++ allocator, which means that native
applications are more sensitive to fragmentation than managed applications; that's one of
the many reasons the GC was invented.

Willy.


Dec 23 '06 #50
