Bytes IT Community

Proper Usage of Shared Memory

When I first discovered shared memory (between multiple processes) I
immediately started thinking about building my own VM subsystem plus
locking mechanisms on top of a single large block of memory. That is one
option; the other appears to be giving each "object" you want to share
its own shared memory segment: allocating objects into a defined shared
memory space. But then you have many, many objects being shared. A VM
subsystem would let you allocate one large contiguous chunk of memory
and rely on your program for allocation, deallocation, spatial
locality/coherence, and everything else.

Is this at all a wise idea? Is there any advantage over having a large
number of small shared memory objects? What overhead does having shared
memory objects incur?

Sometimes I think the answer comes down merely to locking: with a
complex VM subsystem mapped onto a single flat space you could tune much
finer-grained locking for your system. How much more is at stake than
locking? Obviously difficulty of implementation is a key factor, but
what are the technical advantages and disadvantages? Is there a penalty
for having thousands of shared memory objects between a collection of
programs?

Myren
Jul 22 '05 #1
1 Reply


myren, lord wrote:
> When I first discovered shared memory (between multiple processes) I
> immediately started thinking of how to build my own VM subsystem +
> locking mechanisms for a large single block of memory. This seems like
> one option, the other appears to be just having each "object" you want
> to share be a shared mem space to itself: allocate objects into a
> defined shared mem space. But here you have many many objects being
> shared. Having a VM subsystem would allow you to just allocate a large
> contiguous chunk of memory and then rely on your program for allocation,
> deallocation, spatial locality/coherence and all other matters.
First off, this has nothing to do with the C++ language.
Probably a better newsgroup is news:comp.programming.

> Is this at all a wise idea?
Maybe, maybe not. You'll have to check how your operating system
handles the memory request. Some OSes already provide virtual memory,
so when you allocate a contiguous chunk it may not be contiguous in
physical memory; or another task may be using that memory.

> Is there any advantage over having a large
> number of small shared memory objects?
Research the topic of "Memory Fragmentation".

> What overhead does having shared memory objects incur?
The minimal overhead is synchronization, using semaphores, signals, or
mutexes: one must ensure that two tasks are not writing to the same
memory at the same time.

> Sometimes I think the answer comes down to one merely of locking: that
> with a complex vm subsystem mapped on a single flat space you could tune
> far better far finer grained locking for your system. How much more is
> at stake than simply locking? Obviously difficulty of implementation is
> a key factor, but what about technical advantages and disadvantages? is
> there a penalty for having thousands of shared memory objects between a
> collection of programs?
>
> Myren

--
Thomas Matthews

C++ newsgroup welcome message:
http://www.slack.net/~shiva/welcome.txt
C++ Faq: http://www.parashift.com/c++-faq-lite
C Faq: http://www.eskimo.com/~scs/c-faq/top.html
alt.comp.lang.learn.c-c++ faq:
http://www.raos.demon.uk/acllc-c++/faq.html
Other sites:
http://www.josuttis.com -- C++ STL Library book

Jul 22 '05 #2
