Here is a slightly better version which avoids breaking the pseudo file
up into lines and parsing them all. But still Linux only.
/Jean Brouwers
ProphICy Semiconductor, Inc.
<pre>
import os

_proc_status = '/proc/%d/status' % os.getpid()  # Linux only?

_scale = {'kB': 1024.0, 'mB': 1024.0*1024.0,
          'KB': 1024.0, 'MB': 1024.0*1024.0}

def _VmB(VmKey):
    '''Private.
    '''
    # read the /proc/<pid>/status pseudo file
    try:
        t = open(_proc_status)
        v = t.read()
        t.close()
    except IOError:
        return 0.0  # non-Linux?
    # find the VmKey line, e.g. 'VmRSS:  9999 kB\n ...'
    i = v.find(VmKey)
    if i == -1:
        return 0.0  # key not present?
    v = v[i:].split(None, 3)  # split on whitespace, at most 4 fields
    if len(v) < 3:
        return 0.0  # invalid format?
    # convert the Vm value to bytes
    return float(v[1]) * _scale.get(v[2], 0.0)

def memory(since=0.0):
    '''Return memory usage in bytes.
    '''
    return _VmB('VmSize:') - since

def resident(since=0.0):
    '''Return resident memory usage in bytes.
    '''
    return _VmB('VmRSS:') - since

def stacksize(since=0.0):
    '''Return stack size in bytes.
    '''
    return _VmB('VmStk:') - since
</pre>
In article <09******************************************@no.spam.net>,
Jean Brouwers <JB***********************@no.spam.net> wrote:
Assuming you are using Linux, below is an example that reports the memory
usage and stack size of the current process. See the proc(5) man page for
more details.
/Jean Brouwers
ProphICy Semiconductor, Inc.
<pre>
import os

_proc_status = '/proc/%d/status' % os.getpid()  # Linux only?

_scale = {'kB': 1024.0, 'mB': 1024.0*1024.0,
          'KB': 1024.0, 'MB': 1024.0*1024.0}

def _VmB(VmKey):
    try:  # read the /proc/<pid>/status pseudo file
        t = open(_proc_status)
        v = [v for v in t.readlines() if v.startswith(VmKey)]
        t.close()
        # convert the Vm value to bytes
        if len(v) == 1:
            t = v[0].split()  # e.g. 'VmRSS:  9999 kB'
            if len(t) == 3:  ## and t[0] == VmKey:
                return float(t[1]) * _scale.get(t[2], 0.0)
    except (IOError, ValueError):
        pass
    return 0.0

def memory(since=0.0):
    '''Return process memory usage in bytes.
    '''
    return _VmB('VmSize:') - since

def stacksize(since=0.0):
    '''Return process stack size in bytes.
    '''
    return _VmB('VmStk:') - since
</pre>
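Both versions above read the status file once per key. For completeness, a sketch (not from the thread; `vm_stats` is a hypothetical name) that parses every Vm* field in a single pass and returns a dict of byte counts:

```python
import os

_SCALE = {'kB': 1024.0, 'mB': 1024.0*1024.0,
          'KB': 1024.0, 'MB': 1024.0*1024.0}

def vm_stats(pid=None):
    # collect all Vm* fields of /proc/<pid>/status as {name: bytes};
    # assumes Linux, returns an empty dict elsewhere
    path = '/proc/%d/status' % (pid if pid is not None else os.getpid())
    stats = {}
    try:
        with open(path) as f:
            for line in f:
                parts = line.split()  # lines look like 'VmRSS:  9999 kB'
                if len(parts) == 3 and parts[0].startswith('Vm'):
                    stats[parts[0].rstrip(':')] = \
                        float(parts[1]) * _SCALE.get(parts[2], 0.0)
    except (IOError, OSError, ValueError):
        pass  # non-Linux, no such pid, or unexpected format
    return stats
```

One call then answers all the queries at once, e.g. `vm_stats().get('VmRSS', 0.0)` for resident size.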
In article <40*********************@ptn-nntp-reader02.plus.net>, Ian
<in**@fretfarm.co.uk> wrote:
Hi all,
I have a problem. I have an application which needs to work with a lot of
data, but not all at the same time. It is arranged as a set of objects, each
with lots of data that is created when the object is instantiated.
I'd ideally like to keep as many objects as possible in memory, but I can
get rid of any object the program isn't currently using.
Is there any way I can access the amount of memory Python is using? I can
then decide when to give up objects to the gc.
I don't want to use weakrefs, because the gc simply collects weakref'ed
objects whether memory is tight or not, and then I have to recreate them
(which is costly). I only have one internal consumer for the data, and it
works with only one object at a time, but may switch to a different object
at any time.
Thanks in advance
Ian.
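What Ian describes is essentially a memory-aware LRU cache: keep objects until the process grows past a budget, then evict the least-recently-used ones and recreate them on demand via a factory. A hedged sketch of that idea (Python 3; `MemoryBudgetCache` and `factory` are illustrative names, not from the thread, and `memory()` is a cut-down inline version of the /proc helper above):

```python
import os
from collections import OrderedDict

def memory():
    # minimal Linux-only VmSize reader in bytes; returns 0.0 elsewhere
    try:
        with open('/proc/%d/status' % os.getpid()) as f:
            for line in f:
                if line.startswith('VmSize:'):
                    return float(line.split()[1]) * 1024.0
    except (IOError, OSError):
        pass
    return 0.0

class MemoryBudgetCache:
    def __init__(self, factory, budget_bytes):
        self.factory = factory       # recreates an evicted object on demand
        self.budget = budget_bytes
        self.cache = OrderedDict()   # key -> object, least recently used first

    def get(self, key):
        if key in self.cache:
            self.cache.move_to_end(key)        # mark as most recently used
        else:
            self.cache[key] = self.factory(key)
        # evict oldest entries while over budget, keeping the current object
        while len(self.cache) > 1 and memory() > self.budget:
            self.cache.popitem(last=False)
        return self.cache[key]
```

Eviction here is reactive, checking total process size after each lookup, which matches the "one consumer, one object at a time" access pattern: the object in use is never evicted, and anything dropped is rebuilt by the factory on its next `get`.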