Bytes | Software Development & Data Engineering Community

Process "Killed"

Hi,

Overview
=======

I'm doing some simple file manipulation work and the process gets
"Killed" every time I run it. No traceback, no segfault... just the
word "Killed" in the bash shell and the process ends. The first few
batch runs would only succeed with one or two files being processed
(out of 60) before the process was "Killed". Now it makes no
successful progress at all. Just a little processing, then "Killed".
Question
=======

Any ideas? Is there a buffer limitation? Do you think it could be the
filesystem?
Any suggestions appreciated.... Thanks.
The code I'm running:
=====================

from glob import glob

def manipFiles():
    filePathList = glob('/data/ascii/*.dat')
    for filePath in filePathList:
        f = open(filePath, 'r')
        lines = f.readlines()[2:]
        f.close()
        f = open(filePath, 'w')
        f.writelines(lines)
        f.close()
        print file
Sample lines in File:
=====================

# time, ap, bp, as, bs, price, vol, size, seq, isUpLast, isUpVol,
isCancel

1062993789 0 0 0 0 1022.75 1 1 0 1 0 0
1073883668 1120 1119.75 28 33 0 0 0 0 0 0 0
Other Info
========

- The file sizes range from 76 KB to 146 MB
- I'm running on a Gentoo Linux OS
- The filesystem is partitioned and using: XFS for the data
repository, Reiser3 for all else.
Aug 28 '08 #1
dieter wrote:
Hi,

Overview
=======

I'm doing some simple file manipulation work and the process gets
"Killed" every time I run it. No traceback, no segfault... just the
word "Killed" in the bash shell and the process ends. The first few
batch runs would only succeed with one or two files being processed
(out of 60) before the process was "Killed". Now it makes no
successful progress at all. Just a little processing, then "Killed".
That isn't a Python thing. Run "sleep 60" in one shell, then "kill -9"
the process in another shell, and you'll get the same message.

I know my shared web host has a daemon that does that to processes that
consume too many resources.

Wait a minute. If you ran this multiple times, won't it have removed the
first two lines from the first files multiple times, deleting some data
you actually care about? I hope you have backups...
Question
=======

Any ideas? Is there a buffer limitation? Do you think it could be the
filesystem?
Any suggestions appreciated.... Thanks.
The code I'm running:
=====================

from glob import glob

def manipFiles():
    filePathList = glob('/data/ascii/*.dat')
If that dir is very large, that could be slow. Both because glob will
run a regexp over every filename, and because it will return a list of
every file that matches.

If you have Python 2.5, you could use glob.iglob() instead of
glob.glob(), which returns an iterator instead of a list.
    for filePath in filePathList:
        f = open(filePath, 'r')
        lines = f.readlines()[2:]
This reads the entire file into memory. Even better, I bet slicing
copies the list object temporarily, before the first one is destroyed.
        f.close()
        f = open(filePath, 'w')
        f.writelines(lines)
        f.close()
        print file
This is unrelated, but "print file" will just say "<type 'file'>",
because it's the name of a built-in object, and you didn't assign to it
(which you shouldn't anyway).
Actually, if you *only* ran that exact code, it should exit almost
instantly, since it does one import, defines a function, but doesn't
actually call anything. ;-)
Sample lines in File:
=====================

# time, ap, bp, as, bs, price, vol, size, seq, isUpLast, isUpVol,
isCancel

1062993789 0 0 0 0 1022.75 1 1 0 1 0 0
1073883668 1120 1119.75 28 33 0 0 0 0 0 0 0
Other Info
========

- The file sizes range from 76 KB to 146 MB
- I'm running on a Gentoo Linux OS
- The filesystem is partitioned and using: XFS for the data
repository, Reiser3 for all else.
How about this version? (note: untested)

import glob
import os

def manipFiles():
    # If you don't have Python 2.5, use "glob.glob" instead.
    filePaths = glob.iglob('/data/ascii/*.dat')
    for filePath in filePaths:
        print filePath
        fin = open(filePath, 'rb')
        fout = open(filePath + '.out', 'wb')
        # Discard two lines
        fin.next(); fin.next()
        fout.writelines(fin)
        fin.close()
        fout.close()
        os.rename(filePath + '.out', filePath)

I don't know how light it will be on CPU, but it should use very little
memory (unless you have some extremely long lines, I guess). You could
write a version that just used .read() and .write() in chunks.
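A chunked version might look like this (a sketch in modern Python syntax, untested against the original data; the function name `strip_first_lines` and the 64 KB chunk size are my own choices, not from the thread):

```python
def strip_first_lines(src, dst, nlines=2, chunk=64 * 1024):
    """Copy src to dst in fixed-size chunks, skipping the first nlines lines."""
    with open(src, 'rb') as fin, open(dst, 'wb') as fout:
        # Scan byte-by-byte until nlines newlines have gone past.
        seen = 0
        while seen < nlines:
            c = fin.read(1)
            if not c:          # file has fewer than nlines lines
                return
            if c == b'\n':
                seen += 1
        # Stream the remainder; memory use is bounded by the chunk size.
        while True:
            buf = fin.read(chunk)
            if not buf:
                break
            fout.write(buf)
```

Peak memory stays at roughly one chunk no matter how big the file is, which is the point: readlines() scales with file size, this doesn't.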

Also, it temporarily duplicates "whatever.dat" to "whatever.dat.out",
and if "whatever.dat.out" already exists, it will blindly overwrite it.

Also, if this is anything but a one-shot script, you should use
"try...finally" statements to make sure the file objects get closed (or,
in Python 2.5, the "with" statement).
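In current syntax the "with" version is a short rewrite (a sketch; I've made the glob pattern a parameter for illustration, and files shorter than two lines would raise StopIteration here):

```python
import glob
import os

def manip_files(pattern):
    """Strip the first two lines of every file matching pattern, in place."""
    for file_path in glob.iglob(pattern):
        with open(file_path, 'rb') as fin, \
             open(file_path + '.out', 'wb') as fout:
            next(fin)   # discard line 1
            next(fin)   # discard line 2
            fout.writelines(fin)
        # Both files are closed here even if an exception was raised above.
        os.rename(file_path + '.out', file_path)
```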
--
Aug 28 '08 #2
dieter <ve*******@gmail.com> writes:
I'm doing some simple file manipulation work and the process gets
"Killed" every time I run it. No traceback, no segfault... just the
word "Killed" in the bash shell and the process ends. The first few
batch runs would only succeed with one or two files being processed
(out of 60) before the process was "Killed". Now it makes no
successful progress at all. Just a little processing, then "Killed".

Any ideas? Is there a buffer limitation? Do you think it could be the
filesystem?
Any suggestions appreciated.... Thanks.

The code I'm running:
=====================

from glob import glob

def manipFiles():
    filePathList = glob('/data/ascii/*.dat')
    for filePath in filePathList:
        f = open(filePath, 'r')
        lines = f.readlines()[2:]
        f.close()
        f = open(filePath, 'w')
        f.writelines(lines)
        f.close()
        print file
Have you checked memory usage while your program is running? Your

lines = f.readlines()[2:]

statement will need almost twice the memory of your largest file. This
might be a problem, depending on your RAM and what else is running at the
same time.

If you want to reduce memory usage to almost zero, try reading lines from
the file and writing all but the first two to a temporary file, then
renaming the temp file to the original:

import os

infile = open(filePath, 'r')
outfile = open(filePath + '.bak', 'w')

for num, line in enumerate(infile):
    if num >= 2:
        outfile.write(line)

infile.close()
outfile.close()
os.rename(filePath + '.bak', filePath)
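The same skip can also be spelled with itertools.islice, which drops the first items of the line iterator lazily (a sketch; `strip_header` is a hypothetical name, with filePath made a parameter for illustration):

```python
import itertools
import os

def strip_header(file_path, nlines=2):
    """Rewrite file_path in place, dropping its first nlines lines."""
    infile = open(file_path, 'r')
    outfile = open(file_path + '.bak', 'w')
    # islice(infile, nlines, None) yields lines nlines, nlines+1, ...
    outfile.writelines(itertools.islice(infile, nlines, None))
    infile.close()
    outfile.close()
    os.rename(file_path + '.bak', file_path)
```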

Glenn
Aug 29 '08 #3
On 28 Aug, 07:30, dieter <vel.ac...@gmail.com> wrote:
>
I'm doing some simple file manipulation work and the process gets
"Killed" every time I run it. No traceback, no segfault... just the
word "Killed" in the bash shell and the process ends. The first few
batch runs would only succeed with one or two files being processed
(out of 60) before the process was "Killed". Now it makes no
successful progress at all. Just a little processing, then "Killed".
It might be interesting to check the various limits in your shell. Try
this command:

ulimit -a

Documentation can be found in the bash manual page. The limits include
memory size, CPU time, open file descriptors, and a few other things.
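The same limits are also visible from inside Python through the resource module (Unix-only; a sketch, and the particular limit names shown are just a sample):

```python
import resource

# RLIM_INFINITY corresponds to ulimit's "unlimited".
for name in ('RLIMIT_AS', 'RLIMIT_DATA', 'RLIMIT_NOFILE', 'RLIMIT_CPU'):
    soft, hard = resource.getrlimit(getattr(resource, name))
    print('%-13s soft=%s hard=%s' % (name, soft, hard))
```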

Paul
Aug 29 '08 #4
dieter wrote:
Any Ideas? Is there a buffer limitation? Do you think it could be the
filesystem?
what does "ulimit -a" say?

</F>

Aug 29 '08 #5
Glenn Hutchings wrote:
Have you checked memory usage while your program is running? Your

lines = f.readlines()[2:]

statement will need almost twice the memory of your largest file.
footnote: list objects contain references to string objects, not the
strings themselves. the above temporarily creates two list objects, but
the actual file content is only stored once.
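A quick interpreter check makes the footnote concrete (my own illustration, not from the thread): slicing creates a second list object, but both lists hold references to the same strings.

```python
lines = ['x' * 1000 for _ in range(3)]
tail = lines[1:]             # a second list object exists briefly here...
assert tail is not lines     # ...and it is a distinct list,
assert tail[0] is lines[1]   # but the line strings themselves are shared
```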

</F>

Aug 29 '08 #6
I'm doing some simple file manipulation work and the process gets
"Killed" every time I run it. No traceback, no segfault... just the
word "Killed" in the bash shell and the process ends. The first few
batch runs would only succeed with one or two files being processed
(out of 60) before the process was "Killed". Now it makes no
successful progress at all. Just a little processing, then "Killed".
This is the behavior you'll see when your OS has run out of some
memory resource: the kernel's OOM killer sends the process signal 9
(SIGKILL). I'm pretty sure that if you exceed a soft limit instead,
your program will abort with an out-of-memory error.
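The soft-limit case can be demonstrated by capping the process's own address space with resource.setrlimit (a Linux-oriented sketch; `try_big_alloc` and the 256 MB figure are arbitrary choices of mine). Past the soft cap the allocation fails as a catchable MemoryError rather than a SIGKILL:

```python
import resource

def try_big_alloc(limit_bytes=256 * 1024 * 1024):
    """Cap our own address space, then try to allocate past the cap."""
    soft, hard = resource.getrlimit(resource.RLIMIT_AS)
    resource.setrlimit(resource.RLIMIT_AS, (limit_bytes, hard))
    try:
        data = bytearray(2 * limit_bytes)  # should blow past the soft limit
        return 'allocated'
    except MemoryError:
        return 'MemoryError'
    finally:
        # Restore the original limits so the rest of the process is unaffected.
        resource.setrlimit(resource.RLIMIT_AS, (soft, hard))
```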

Eric
Aug 30 '08 #7
On Sat, Aug 30, 2008 at 11:07 AM, Eric Wertman <ew******@gmail.com> wrote:
> I'm doing some simple file manipulation work and the process gets
"Killed" every time I run it. No traceback, no segfault... just the
word "Killed" in the bash shell and the process ends. The first few
batch runs would only succeed with one or two files being processed
(out of 60) before the process was "Killed". Now it makes no
successful progress at all. Just a little processing, then "Killed".

This is the behavior you'll see when your os has run out of some
memory resource. The kernel sends a 9 signal. I'm pretty sure that
if you exceed a soft limit your program will abort with out of memory
error.

Eric
Eric, thank you very much for your response.
Sep 2 '08 #8
