Bytes | Software Development & Data Engineering Community

Possible to set cpython heap size?

I have an application that scans and processes a bunch of text files.
The content I'm pulling out and holding in memory is at least 200MB.

I'd love to be able to tell the CPython virtual machine that I need a
heap of, say 300MB up front rather than have it grow as needed. I've
had a scan through the archives of comp.lang.python and the python
docs but cannot find a way to do this. Is this possible to configure
the PVM this way?

Much appreciated,
Andy
--

Feb 22 '07 #1
Andy Watson wrote:
>I have an application that scans and processes a bunch of text files.
>The content I'm pulling out and holding in memory is at least 200MB.
>
>I'd love to be able to tell the CPython virtual machine that I need a
>heap of, say 300MB up front rather than have it grow as needed. I've
>had a scan through the archives of comp.lang.python and the python
>docs but cannot find a way to do this. Is this possible to configure
>the PVM this way?

Why do you want that? And no, it is not possible. And to be honest: I have
no idea why e.g. the JVM allows for this.

Diez
Feb 22 '07 #2
>Why do you want that? And no, it is not possible. And to be honest: I have
>no idea why e.g. the JVM allows for this.
>
>Diez

The reason why is simply that I know roughly how much memory I'm going
to need, and cpython seems to be taking a fair amount of time
extending its heap as I read in content incrementally.

Ta,
Andy
--

Feb 22 '07 #3
Andy Watson wrote:
>>Why do you want that? And no, it is not possible. And to be honest: I have
>>no idea why e.g. the JVM allows for this.
>
>The reason why is simply that I know roughly how much memory I'm going
>to need, and cpython seems to be taking a fair amount of time
>extending its heap as I read in content incrementally.
I'm not an expert in Python malloc schemes; I know that _some_ things are
heavily optimized, but I'm not aware that it does some clever
self-management of the heap in the general case. Which would be complicated
in the presence of arbitrary C extensions anyway.
However, I'm having doubts that your observation is correct. A simple

python -m timeit -n 1 -r 1 "range(50000000)"
1 loops, best of 1: 2.38 sec per loop

will create a python-process of half a gig ram - for a split-second - and I
don't consider 2.38 seconds a fair amount of time for heap allocation.

When I used a 4 times larger argument, my machine began swapping. THEN
things became ugly - but I don't see how preallocation will help there...

Diez
Feb 22 '07 #4
Andy Watson wrote:
>>Why do you want that? And no, it is not possible. And to be honest: I have
>>no idea why e.g. the JVM allows for this.
>>
>>Diez
>
>The reason why is simply that I know roughly how much memory I'm going
>to need, and cpython seems to be taking a fair amount of time
>                     ^^^^^
>extending its heap as I read in content incrementally.
First make sure this is really the case.
It may be that you are just using an inefficient algorithm.
In my experience allocating extra heap memory is hardly ever
noticeable. Unless your system is out of physical RAM and has
to swap.

--Irmen
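Irmen's advice to verify before blaming the allocator can be sketched with the standard cProfile module. The loader function and the temporary file below are hypothetical stand-ins for the original poster's scanning code, not anything from this thread:

```python
import cProfile
import io
import os
import pstats
import tempfile

def load(paths):
    # Hypothetical loader: read each file and keep every line in memory.
    out = []
    for p in paths:
        with open(p) as f:
            out.extend(f.readlines())
    return out

# Write a small throwaway file so the profile has something to chew on.
tmp = tempfile.NamedTemporaryFile(mode='w', suffix='.txt', delete=False)
tmp.write('some scanned line\n' * 1000)
tmp.close()

# Profile the load; if the time turns out to be in parsing or data
# structure building, no amount of heap tuning will help.
prof = cProfile.Profile()
lines = prof.runcall(load, [tmp.name])
buf = io.StringIO()
pstats.Stats(prof, stream=buf).sort_stats('cumulative').print_stats(5)
print(buf.getvalue())
os.unlink(tmp.name)
```

If the cumulative time is dominated by your own functions rather than I/O or allocation, the algorithm is the place to look first.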
Feb 22 '07 #5
On 22 Feb 2007 09:52:49 -0800, Andy Watson <al********@gmail.com> wrote:
>>Why do you want that? And no, it is not possible. And to be honest: I have
>>no idea why e.g. the JVM allows for this.
>>
>>Diez
>
>The reason why is simply that I know roughly how much memory I'm going
>to need, and cpython seems to be taking a fair amount of time
>extending its heap as I read in content incrementally.
To my knowledge, no modern OS actually commits any memory at all to a
process until it is written to. Pre-extending the heap would either a)
do nothing, because it'd be essentially a no-op, or b) take at
least as long as doing it incrementally (because Python would need to
fill up all that space with objects), without giving you any actual
performance gain when you fill the object space "for real".

In Java, as I understand it, having a fixed-size heap allows some
optimizations in the garbage collector. Python's GC model is different
and, as far as I know, is unlikely to benefit from this.
Feb 22 '07 #6
Andy Watson wrote:
>I have an application that scans and processes a bunch of text files.
>The content I'm pulling out and holding in memory is at least 200MB.
>
>I'd love to be able to tell the CPython virtual machine that I need a
>heap of, say 300MB up front rather than have it grow as needed. I've
>had a scan through the archives of comp.lang.python and the python
>docs but cannot find a way to do this. Is this possible to configure
>the PVM this way?
>
>Much appreciated,
>Andy

Others have already suggested swap as a possible cause of slowness. I've
been playing with my laptop (dual-core Intel T2300 @ 1.66 GHz; 1 GB of
memory; Win XP; PyScripter IDE) using the following code:

#=======================
import datetime

'''
# Create 10 files with sizes 1MB, ..., 10MB
for i in range(1,11):
    print 'Writing: ' + 'Bytes_' + str(i*1000000)
    f = open('Bytes_' + str(i*1000000), 'w')
    f.write(str(i-1)*i*1000000)
    f.close()
'''

# Read the files 5 times, concatenating the contents
# into one HUGE string
now_1 = datetime.datetime.now()
s = ''
for count in range(5):
    for i in range(1,11):
        print 'Reading: ' + 'Bytes_' + str(i*1000000)
        f = open('Bytes_' + str(i*1000000), 'r')
        s = s + f.read()
        f.close()
        print 'Size of s is', len(s)
print 's[274999999] = ' + s[274999999]
now_2 = datetime.datetime.now()
print now_1
print now_2
raw_input('???')
#=======================

The part at the start that is commented out is the part I used to create
the 10 files. The second part prints the following output (abbreviated):

Reading: Bytes_1000000
Size of s is 1000000
Reading: Bytes_2000000
Size of s is 3000000
Reading: Bytes_3000000
Size of s is 6000000
Reading: Bytes_4000000
Size of s is 10000000
Reading: Bytes_5000000
Size of s is 15000000
Reading: Bytes_6000000
Size of s is 21000000
Reading: Bytes_7000000
Size of s is 28000000
Reading: Bytes_8000000
Size of s is 36000000
Reading: Bytes_9000000
Size of s is 45000000
Reading: Bytes_10000000
Size of s is 55000000
<snip>
Reading: Bytes_9000000
Size of s is 265000000
Reading: Bytes_10000000
Size of s is 275000000
s[274999999] = 9
2007-02-22 20:23:09.984000
2007-02-22 20:23:21.515000

As can be seen, creating a 275 MB string by reading the parts from the
files took less than 12 seconds. I think this is fast enough, but others
might disagree! ;)

Using the Win Task Manager I can see the process grow to a little
less than 282 MB when it reaches the raw_input call, and drop to less
than 13 MB a little after I've given some input, apparently as a result
of PyScripter doing a GC.

Your situation (hardware, file sizes etc.) may differ so that my
experiment does not correspond to it, but this was my 2 cents' worth!

HTH,
Jussi
Feb 22 '07 #7
On Feb 22, 10:53 am, a bunch of folks wrote:
>Memory is basically free.

This is true if you are simply scanning a file into memory. However,
I'm storing the contents in some in-memory data structures and doing
some data manipulation. This is my speculation:

Several small objects per scanned line get allocated, and then
unreferenced. If the heap is relatively small, GC has to do some work
in order to make space for subsequent scan results. At some point, it
realises it cannot keep up and has to extend the heap. At this point,
VM and physical memory is committed, since it needs to be used. And
this keeps going on. At some point, GC will take a good deal of time
to compact the heap, since I am loading in so much data and creating
a lot of smaller objects.

If I could have a heap that is larger and does not need to be
dynamically extended, then the Python GC could work more efficiently.

Interesting discussion.
Cheers,
Andy
--

Feb 22 '07 #8
On 22 Feb 2007 11:28:52 -0800, Andy Watson <al********@gmail.com> wrote:
>On Feb 22, 10:53 am, a bunch of folks wrote:
>>Memory is basically free.
>
>This is true if you are simply scanning a file into memory. However,
>I'm storing the contents in some in-memory data structures and doing
>some data manipulation. This is my speculation:
>
>Several small objects per scanned line get allocated, and then
>unreferenced. If the heap is relatively small, GC has to do some work
>in order to make space for subsequent scan results. At some point, it
>realises it cannot keep up and has to extend the heap. At this point,
>VM and physical memory is committed, since it needs to be used. And
>this keeps going on. At some point, GC will take a good deal of time
>to compact the heap, since I am loading in so much data and creating
>a lot of smaller objects.
>
>If I could have a heap that is larger and does not need to be
>dynamically extended, then the Python GC could work more efficiently.
I haven't even looked at Python memory management internals since 2.3,
and not in detail then, so I'm sure someone will correct me in the
case that I am wrong.

However, I believe that this is almost exactly how CPython GC does not
work. CPython is refcounted with a generational GC for cycle
detection. There's a memory pool that is used for object allocation
(more than one, I think, for different types of objects) and those can
be extended but they are not, to my knowledge, compacted.

If you're creating the same small objects for each scanned line, and
especially if they are tuples or new-style objects with __slots__,
then the memory use for those objects should be more or less constant.
Your memory growth is probably related to the information you're
saving, not to your scanned objects, and since those are long-lived
objects I simply don't see how heap pre-allocation could be helpful
there.
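The __slots__ point above can be seen directly with sys.getsizeof. The class names here are made up purely for illustration:

```python
import sys

class Plain(object):
    def __init__(self, a, b):
        self.a = a
        self.b = b

class Slotted(object):
    # __slots__ suppresses the per-instance __dict__, so each
    # instance stores just the two attribute references.
    __slots__ = ('a', 'b')
    def __init__(self, a, b):
        self.a = a
        self.b = b

p, s = Plain(1, 2), Slotted(1, 2)
# A plain instance carries a separate __dict__; count it too.
plain_size = sys.getsizeof(p) + sys.getsizeof(p.__dict__)
slotted_size = sys.getsizeof(s)
print(plain_size, slotted_size)
```

With millions of per-line objects, that per-instance saving adds up quickly.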
Feb 22 '07 #9
Andy Watson wrote:
>I have an application that scans and processes a bunch of text files.
>The content I'm pulling out and holding in memory is at least 200MB.
>
>I'd love to be able to tell the CPython virtual machine that I need a
>heap of, say 300MB up front rather than have it grow as needed. I've
>had a scan through the archives of comp.lang.python and the python
>docs but cannot find a way to do this. Is this possible to configure
>the PVM this way?
You can configure your operating system. On Unix, do 'ulimit -m 200000'.

Regards,
Martin
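The same effect as the ulimit tip can be had from inside the process via the standard resource module (Unix only). Note this caps memory rather than pre-allocating it; the 1 GB figure below is an arbitrary example value:

```python
import resource

# Read the current address-space limit, then lower the soft limit.
# This does not pre-allocate anything; it just makes allocations
# beyond the cap fail with MemoryError instead of growing further.
soft, hard = resource.getrlimit(resource.RLIMIT_AS)
one_gb = 1024 * 1024 * 1024
resource.setrlimit(resource.RLIMIT_AS, (one_gb, hard))
print(resource.getrlimit(resource.RLIMIT_AS)[0])
```

A MemoryError raised at the cap is at least catchable from Python, unlike the OS killing the process when swap runs out.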
Feb 22 '07 #10
Chris Mellon wrote:
>On 22 Feb 2007 11:28:52 -0800, Andy Watson <al********@gmail.com> wrote:
>>On Feb 22, 10:53 am, a bunch of folks wrote:
>>>Memory is basically free.
>>
>>This is true if you are simply scanning a file into memory. However,
>>I'm storing the contents in some in-memory data structures and doing
>>some data manipulation. This is my speculation:
>>
>>Several small objects per scanned line get allocated, and then
>>unreferenced. If the heap is relatively small, GC has to do some work
>>in order to make space for subsequent scan results. At some point, it
>>realises it cannot keep up and has to extend the heap. At this point,
>>VM and physical memory is committed, since it needs to be used. And
>>this keeps going on. At some point, GC will take a good deal of time
>>to compact the heap, since I am loading in so much data and creating
>>a lot of smaller objects.
>>
>>If I could have a heap that is larger and does not need to be
>>dynamically extended, then the Python GC could work more efficiently.
>
>I haven't even looked at Python memory management internals since 2.3,
>and not in detail then, so I'm sure someone will correct me in the
>case that I am wrong.
>
>However, I believe that this is almost exactly how CPython GC does not
>work. CPython is refcounted with a generational GC for cycle
>detection. There's a memory pool that is used for object allocation
>(more than one, I think, for different types of objects) and those can
>be extended but they are not, to my knowledge, compacted.
>
>If you're creating the same small objects for each scanned line, and
>especially if they are tuples or new-style objects with __slots__,
>then the memory use for those objects should be more or less constant.
>Your memory growth is probably related to the information you're
>saving, not to your scanned objects, and since those are long-lived
>objects I simply don't see how heap pre-allocation could be helpful
>there.
Python's internal memory management is split:
- allocations up to 256 bytes (the majority of objects) are handled by
a custom allocator, which uses 256kB arenas malloc()ed from the OS on
demand. With 2.5 some additional work was done to allow returning
completely empty arenas to the OS; 2.3 and 2.4 don't return arenas at
all.
- all allocations over 256 bytes, including container objects that are
extended beyond 256 bytes, are made by malloc().

I can't recall off-hand whether the free-list structures for ints (and
floats?) use the Python allocator or direct malloc(); as the free-lists
don't release any entries, I suspect not.

The maximum allocation size and arena size used by the Python allocator
are hard-coded for algorithmic and performance reasons, and cannot
practically be changed, especially at runtime. No active compaction
takes place in arenas, even with GC. The only time object data is
relocated between arenas is when an object is resized.

If Andy Watson is creating loads of objects that aren't being managed
by Python's allocator (by being larger than 256 bytes, or in a type
free-list), then the platform malloc() behaviour applies. Some platform
allocators can be tuned via environment variables and the like, in which
case review of the platform documentation is indicated.

Some platform allocators are notorious for poor behaviour in certain
circumstances, and coalescing blocks while deallocating is one
particularly nasty problem for code that creates and destroys lots
of small variably sized objects.
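The size split described above can be inspected informally with sys.getsizeof. The 256-byte figure is the CPython 2.x-era threshold under discussion (modern CPython uses 512 bytes), and the mapping printed here is only an approximation of which allocator handles a given object:

```python
import sys

# Small-object threshold for the pymalloc allocator: 256 bytes in the
# CPython 2.x era discussed here (512 bytes in modern CPython).
THRESHOLD = 256

for obj in (0, 3.14, (1, 2, 3), 'a short string', list(range(1000))):
    size = sys.getsizeof(obj)
    # Objects at or under the threshold are served from pymalloc
    # arenas; larger ones go straight to the platform malloc().
    route = 'pymalloc arena' if size <= THRESHOLD else 'malloc()'
    print('%-10s %6d bytes -> %s' % (type(obj).__name__, size, route))
```

Note that getsizeof reports only the object header and its direct payload, not memory owned by referenced objects.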

--
-------------------------------------------------------------------------
Andrew I MacIntyre "These thoughts are mine alone..."
E-mail: an*****@bullseye.apana.org.au (pref) | Snail: PO Box 370
an*****@pcug.org.au (alt) | Belconnen ACT 2616
Web: http://www.andymac.org/ | Australia
Feb 24 '07 #11
In article <11**********************@v33g2000cwv.googlegroups.com>,
"Andy Watson" <al********@gmail.com> wrote:

...
If I could have a heap that is larger and does not need to be
dynamically extended, then the Python GC could work more efficiently.
...

GC! If you're allocating lots of objects and holding on to them, GC
will run frequently, but won't find anything to free. Maybe you want to
turn off GC, at least some of the time? See the GC module, esp.
set_threshold().

Note that the cyclic GC is only really a sort of safety net for
reference loops, as normally objects are free'd when their last
reference is lost.
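A sketch of that advice: disable the cyclic collector around the bulk load, or raise the generation-0 threshold so collections run less often. The loop below is a stand-in for the real scanning work:

```python
import gc

# Reference counting still reclaims non-cyclic garbage immediately;
# the collector disabled here only hunts for reference cycles.
gc.disable()
data = []
for i in range(100000):
    data.append((i, str(i)))  # stand-in for per-line scan results
gc.enable()

# Alternatively, keep the collector on but make it run far less often
# by raising the generation-0 allocation threshold.
gc.set_threshold(100000, 10, 10)
print(gc.get_threshold())
```

If the loaded objects really contain no cycles, disabling the collector during the load costs nothing in reclaimed memory.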
________________________________________________________________________
TonyN.:' *firstname*nlsnews@georgea*lastname*.com
' <http://www.georgeanelson.com/>
Feb 24 '07 #12