Bytes IT Community

Multiple instance of process - memory conflicts

I wonder if anybody could shed some light on a problem I am
encountering.

I have written a program in C that runs on Solaris 2.8. At busy times
of the day there may be multiple instances of it running (5-10), each
process taking approx. 3 seconds to complete.

Each instance of the program basically fetches data from an Oracle
database (using Remedy ARS API routines) and stores it in a
user-defined structure that I have defined as a global variable. The
problem I encounter is when multiple instances of the program are
running. Often I find that the data in memory being held in one
process is mixed up with the data in memory of another.

A simplified example could be:

Process instance 1 fetches data from record NA1234 and stores the
record id in a string variable str1.

Process instance 2 fetches data from another record, NA9999, and also
stores the record id in a string variable str1.

However, when I check the value of str1 in process instance 2, it
sometimes says NA9999 (which would be correct) but other times it says
NA1234 (i.e. as if it were sharing the memory space of process 1),
especially when the processes are running in parallel.

Can this phenomenon be attributed to global variables? If so, would
changing to local variables relieve it?

Any assistance greatly appreciated.

-- Shuaib
Nov 13 '05 #1
5 Replies


sr******@yahoo.com (Shuaib) writes:
> I wonder if anybody could shed some light on a problem I am
> encountering.
>
> I have written a program in C that runs on Solaris 2.8. At busy times
> of the day there may be multiple instances of it running (5-10), each
> process taking approx. 3 seconds to complete.
>
> Each instance of the program basically fetches data from an Oracle
> database (using Remedy ARS API routines) and stores it in a
> user-defined structure that I have defined as a global variable. The
> problem I encounter is when multiple instances of the program are
> running. Often I find that the data in memory being held in one
> process is mixed up with the data in memory of another.


Are these processes using shared memory? If not, it's impossible to
get a mix-up. If they are, you will get that sort of problem unless
you do something to prevent it.

--
Måns Rullgård
mr*@users.sf.net
Nov 13 '05 #2

In article <12**************************@posting.google.com >, Shuaib wrote:
> I wonder if anybody could shed some light on a problem I am
> encountering.
>
> I have written a program in C that runs on Solaris 2.8. At busy times
> of the day there may be multiple instances of it running (5-10), each
> process taking approx. 3 seconds to complete.
>
> Each instance of the program basically fetches data from an Oracle
> database (using Remedy ARS API routines) and stores it in a
> user-defined structure that I have defined as a global variable. The
> problem I encounter is when multiple instances of the program are
> running. Often I find that the data in memory being held in one
> process is mixed up with the data in memory of another.
>
> A simplified example could be:
>
> Process instance 1 fetches data from record NA1234 and stores the
> record id in a string variable str1.
>
> Process instance 2 fetches data from another record, NA9999, and also
> stores the record id in a string variable str1.

This is virtually impossible between separate processes, I'd say; more
likely it is one of:
* a bug somewhere in your program;
* you are using threads rather than processes, and have locking
  issues;
* you use shared memory, and have locking issues;
* Oracle, or the API you use, somehow screws things up for you, or
  you haven't thought things through enough, e.g. something updates
  your database while you query it, and similar potential problems.
Nov 13 '05 #3

Shuaib wrote:
> I wonder if anybody could shed some light on a problem I am
> encountering.
>
> I have written a program in C that runs on Solaris 2.8. At busy times
> of the day there may be multiple instances of it running (5-10), each
> process taking approx. 3 seconds to complete.
>
> Each instance of the program basically fetches data from an Oracle
> database (using Remedy ARS API routines) and stores it in a
> user-defined structure that I have defined as a global variable. The
> problem I encounter is when multiple instances of the program are
> running. Often I find that the data in memory being held in one
> process is mixed up with the data in memory of another.


My guess is you've done something like this:

(1) open Oracle connection in parent process
(2) fork()
(3) do Oracle query in children

That may not be a valid way to use the Oracle db client code.
I would try doing this instead:

(1) fork()
(2) open Oracle connection in child
(3) do Oracle query in child

I'm not an Oracle expert; I'm only thinking of what might happen
if two instances of the client code are sharing the same initial
values in their data structures and are sharing the TCP connection.

Hope that helps.

- Logan

Nov 13 '05 #4

Logan Shaw wrote:
> Shuaib wrote:
>> I wonder if anybody could shed some light on a problem I am
>> encountering.
>>
>> I have written a program in C that runs on Solaris 2.8. At busy times
>> of the day there may be multiple instances of it running (5-10), each
>> process taking approx. 3 seconds to complete.
>>
>> Each instance of the program basically fetches data from an Oracle
>> database (using Remedy ARS API routines) and stores it in a
>> user-defined structure that I have defined as a global variable. The
>> problem I encounter is when multiple instances of the program are
>> running. Often I find that the data in memory being held in one
>> process is mixed up with the data in memory of another.
>
> My guess is you've done something like this:
>
> (1) open Oracle connection in parent process
> (2) fork()
> (3) do Oracle query in children
>
> That may not be a valid way to use the Oracle db client code.
> I would try doing this instead:
>
> (1) fork()
> (2) open Oracle connection in child
> (3) do Oracle query in child
>
> I'm not an Oracle expert; I'm only thinking of what might happen
> if two instances of the client code are sharing the same initial
> values in their data structures and are sharing the TCP connection.


Extremely likely.

I have found (painfully!) that even this leads to trouble:

(1) Connect to Oracle in parent
(2) fork
(3) Do nothing related to Oracle in child
(4) Exit child

or this:

(3) Do nothing related to Oracle in parent
(4) Exit parent

Because the Oracle client runtime registers an atexit handler, just
exiting the process becomes akin to an Oracle request!

Since there does not seem to be any way to unregister an atexit handler,
the only thing that works is to have *NO* Oracle connection at all while
you fork. (There may be something in the most recent Oracle OCI API to
circumvent that problem, but I haven't found it yet.)

> Hope that helps.
>
> - Logan

--
Michel Bardiaux
Peaktime Belgium S.A. Bd. du Souverain, 191 B-1160 Bruxelles
Tel : +32 2 790.29.41

Nov 13 '05 #5


"Michel Bardiaux" <mb*******@peaktime.be> wrote in message
news:3F**************@peaktime.be...
> I have found (painfully!) that even this leads to trouble:
>
> (1) Connect to Oracle in parent
> (2) fork
> (3) Do nothing related to Oracle in child
> (4) Exit child
>
> or this:
>
> (3) Do nothing related to Oracle in parent
> (4) Exit parent
>
> Because the Oracle client runtime registers an atexit handler, just
> exiting the process becomes akin to an Oracle request!


That's why the child should not call 'exit'. The same problem exists
with stdio streams.

DS
Nov 13 '05 #6
