Bytes IT Community

Inter-process communication

We're looking at running a memory-intensive process for a web site as a
Windows service, in isolation from IIS, because IIS refuses to consume all of
the available physical RAM. We're considering remoting to move data in and out of
this process. We need something that's quick and dirty and easy to implement,
but that's performant and secure at the same time. Any suggestions /
tutorials? We'd prefer not to go over the TCP/IP stack (sockets) as it is not
very performant, but it certainly is quick and dirty and we might go with it
anyway unless there is another way, with shared memory, that is as easy
and more performant.

Jon
Apr 5 '07 #1
28 Replies


Jon,

I think that you might want to consider shared memory in this case,
assuming you want to be on the same machine as IIS (although, I have to
question why you would want to starve that machine, and not dedicate another
machine to performing this task, as you run the risk of starving IIS of
resources).

Are you passing massive amounts of data between the processes? If so, I
can't say remoting is a good solution. With remoting, you can marshal
objects by reference or by value. If you pass your massive data buffer
across the app domain boundary by value, you are going to incur a huge
cost copying that buffer on every crossing.

If your buffer has an affinity to the app domain it is in (derives from
MarshalByRefObject), then you can make calls into the object from the remote
process, but depending on how many calls you have to make to get manageable
chunks of data, this might be too expensive as well.
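To make the by-value/by-reference distinction concrete, here is a minimal sketch; the type and member names are invented for illustration:

```csharp
using System;

// Marshal-by-value: the whole object is serialized and copied
// across the boundary on every call that passes it.
[Serializable]
public class IndexChunk
{
    public byte[] Data;
}

// Marshal-by-reference: the object stays in the host process; the
// remote side gets a proxy, and every member access is a
// cross-process call.
public class IndexService : MarshalByRefObject
{
    private readonly byte[] buffer = new byte[1 << 20];

    // Each call crosses the process boundary, so chunk size becomes
    // the trade-off: big chunks copy more per call, small chunks
    // mean more round trips.
    public byte[] ReadChunk(int offset, int count)
    {
        byte[] chunk = new byte[count];
        Array.Copy(buffer, offset, chunk, 0, count);
        return chunk;
    }
}
```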

I think that a better solution would be to have a separate machine which
is dedicated to this task, and then sending off the data buffers (or chunks
of them) to the machine to be processed. You can use MSMQ for this, or
maybe even a file drop, in which case, you have something like BizTalk pick
up the file drop. You could use WCF as well, as there is support for large
message sizes (although there is a message buffer limit there as well which
you have to tweak if the buffer is exceptionally large).

Which comes back to shared memory. If you are determined to stay on the
same machine, then you can have the IIS process write to shared memory, then
signal the service to look at a particular block of shared memory to
process. Of course, you will have to write all the coordination routines
yourself (which is going to be a pain as well).

Hope this helps.
--
- Nicholas Paldino [.NET/C# MVP]
- mv*@spam.guard.caspershouse.com


Apr 5 '07 #2

The memory load could be in the range of 1GB, basically hosting indexes
in-memory for fast access to search thousands of large pieces of data. The
server has 4GB, but IIS never uses more than 1GB, which leaves us 3GB
unused, and also makes IIS vulnerable to running out of RAM if we were to
fill up its tiny 1GB rather than isolate the process.

We would love to offload to another server, but the problem there becomes the
bottleneck of 1 Gb/s network bandwidth, which is largely reserved for the other
users who are doing heavy SQL Server queries (and SQL Server is not nearly
as performant for what we are indexing; the difference is like 10ms vs. 500ms).
We would also then deal with TCP/IP packet encapsulation, which is a huge
performance hit.

Shared memory is of course ideal. The problem is that I asked about shared memory in
the .NET world a year or two ago and was told it's not possible in the C#
world; you have to use remoting. Or, use C++ (and native APIs), which I am
not privy to, although if someone can point me to P/Invoke API tutorials
that are relevant to shared memory with C#/.NET, I'd be curious.

Jon

Apr 5 '07 #3

Jon,

I think that shared memory is very viable. You will have to code it
yourself though, and use a fair amount of P/Invoke. First, I recommend
reading the section of the MSDN documentation titled "Managing Memory-Mapped
Files in Win32", located at:

http://msdn2.microsoft.com/en-us/library/ms810613.aspx

For working with MMFs in .NET, I would recommend creating a class that
derives from Stream, which would allow you to work with the MMF. Basically,
you would have the file that you are using as the MMF, and then you would
call the MapViewOfFileEx API function and get the pointer at which you can
start writing. You can then take wherever the user wants to read from
/write to in the stream and offset that value by the pointer returned from
MapViewOfFileEx to find the memory location to read from/write to.
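A compressed sketch of what that plumbing can look like, using plain MapViewOfFile (MapViewOfFileEx only adds the ability to request a specific base address). The class and method names are invented and all error handling is omitted; treat it as an outline, not production code:

```csharp
using System;
using System.Runtime.InteropServices;

public class SharedMemory : IDisposable
{
    [DllImport("kernel32.dll", SetLastError = true, CharSet = CharSet.Auto)]
    static extern IntPtr CreateFileMapping(IntPtr hFile, IntPtr lpAttributes,
        uint flProtect, uint dwMaxSizeHigh, uint dwMaxSizeLow, string lpName);

    [DllImport("kernel32.dll", SetLastError = true)]
    static extern IntPtr MapViewOfFile(IntPtr hMap, uint dwAccess,
        uint dwOffsetHigh, uint dwOffsetLow, UIntPtr dwNumBytes);

    [DllImport("kernel32.dll")]
    static extern bool UnmapViewOfFile(IntPtr lpBaseAddress);

    [DllImport("kernel32.dll")]
    static extern bool CloseHandle(IntPtr hObject);

    const uint PAGE_READWRITE = 0x04;
    const uint FILE_MAP_ALL_ACCESS = 0x000F001F;
    static readonly IntPtr INVALID_HANDLE_VALUE = new IntPtr(-1);

    IntPtr hMapping;
    IntPtr view;

    // Backed by the system paging file (INVALID_HANDLE_VALUE rather
    // than a real file handle); both processes open the mapping by
    // the same well-known name.
    public SharedMemory(string name, uint size)
    {
        hMapping = CreateFileMapping(INVALID_HANDLE_VALUE, IntPtr.Zero,
            PAGE_READWRITE, 0, size, name);
        view = MapViewOfFile(hMapping, FILE_MAP_ALL_ACCESS, 0, 0, (UIntPtr)size);
    }

    public void Write(int offset, byte[] data)
    {
        Marshal.Copy(data, 0, new IntPtr(view.ToInt64() + offset), data.Length);
    }

    public void Read(int offset, byte[] data)
    {
        Marshal.Copy(new IntPtr(view.ToInt64() + offset), data, 0, data.Length);
    }

    public void Dispose()
    {
        UnmapViewOfFile(view);
        CloseHandle(hMapping);
    }
}
```

A Stream-derived wrapper would sit on top of this, translating Position/Read/Write into the offset arithmetic shown here.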

--
- Nicholas Paldino [.NET/C# MVP]
- mv*@spam.guard.caspershouse.com

Apr 5 '07 #4

I forgot to mention that you will have to send signals, through
remoting or some other technology, to tell the other process sharing the
memory-mapped file with you when to process, when it's done,
and so on.
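One stock way to do that signaling in .NET 2.0 is a named EventWaitHandle, which both processes open by name; the event name here is invented for illustration:

```csharp
using System.Threading;

class Signals
{
    // "Global\" makes the event visible across sessions, e.g. from
    // the web app to a Windows service. Name invented for illustration.
    const string DataReadyName = @"Global\MyApp.DataReady";

    // Producer side (IIS): write into the shared block, then signal.
    public static void SignalDataReady()
    {
        using (EventWaitHandle ready =
            new EventWaitHandle(false, EventResetMode.AutoReset, DataReadyName))
        {
            // ... write the request into shared memory first ...
            ready.Set();   // wake the service
        }
    }

    // Consumer side (the service), typically on a dedicated thread.
    public static void ServiceLoop()
    {
        using (EventWaitHandle ready =
            new EventWaitHandle(false, EventResetMode.AutoReset, DataReadyName))
        {
            while (ready.WaitOne())
            {
                // ... read the shared block, process it, then signal a
                // companion "result ready" event back the other way ...
            }
        }
    }
}
```

If the service runs under a different account than the worker process, the event may also need an access-control list granting that account synchronize/modify rights.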
--
- Nicholas Paldino [.NET/C# MVP]
- mv*@spam.guard.caspershouse.com

Apr 5 '07 #5

Hello!

Check MsgConnect (http://www.msgconnect.com); it seems to fit your
requirements.

With best regards,
Eugene Mayevski
http://www.SecureBlackbox.com - the comprehensive component suite for
network security

Apr 5 '07 #6

What about System.Runtime.Remoting.Channels.Ipc (the named-pipes remoting
channel in .NET 2.0)? Is MMF easier or faster than Ipc?
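For reference, wiring such a channel up takes only a few lines. This is a sketch assuming .NET 2.0, with the pipe name, object URI, and IndexService type all invented for illustration:

```csharp
using System;
using System.Runtime.Remoting;
using System.Runtime.Remoting.Channels;
using System.Runtime.Remoting.Channels.Ipc;

// Hypothetical remoted type; it must derive from MarshalByRefObject.
public class IndexService : MarshalByRefObject
{
    public int Search(string query) { /* ... */ return 0; }
}

static class IpcSetup
{
    // Server side (the Windows service): expose a singleton over a named pipe.
    public static void StartServer()
    {
        IpcChannel channel = new IpcChannel("SearchServicePipe");
        ChannelServices.RegisterChannel(channel, false);
        RemotingConfiguration.RegisterWellKnownServiceType(
            typeof(IndexService), "index", WellKnownObjectMode.Singleton);
    }

    // Client side (the web app): obtain a transparent proxy.
    public static IndexService Connect()
    {
        ChannelServices.RegisterChannel(new IpcChannel(), false);
        return (IndexService)Activator.GetObject(
            typeof(IndexService), "ipc://SearchServicePipe/index");
    }
}
```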

Jon

Apr 5 '07 #7

This looks ridiculously straightforward.

http://www.developer.com/net/vb/arti...0926_3520891_2

Jon
Apr 5 '07 #8


"Jon Davis" <jo*@REMOVE.ME.PLEASE.jondavis.net> wrote in message
news:us**************@TK2MSFTNGP04.phx.gbl...
This looks ridiculously straightforward.

http://www.developer.com/net/vb/arti...0926_3520891_2
Erm, that's Part 2 of a two-part, ridiculously straightforward article.

http://www.developer.com/net/vb/arti...0926_3520891_1
Apr 5 '07 #9

Jon,

You could use that, but in the end, you have to consider how much data
you are going to push across this pipe. MMF might be better if you have to
access that data, but for signalling between the two applications, I would
go with remoting, or, if you can use .NET 3.0 (which is 2.0 with some
additional class libraries), WCF.
--
- Nicholas Paldino [.NET/C# MVP]
- mv*@spam.guard.caspershouse.com

Apr 5 '07 #10

I see what you're saying, per your original reply. I guess I need to run some
tests to see what impact a MarshalByRef implementation, for remoting, would
have. The problem is the level of effort for MMF, which isn't native to .NET
(or is it?).

We are performing search queries between the IIS process and the indexing
process, so the client/server model works fine for us (better, actually).
The compromise of ease of implementation w/ MarshalByRef for Ipc vs. the
performance overhead of TCP sockets might be legitimate enough to go with
IPC remoting over named pipes.

But if you can find a sample of MMF that's straightforward, obviously not
necessarily as straightforward as the link I just found and posted but
relatively readable, I'd be most grateful.

I'm really not interested at this time in pursuing the purchase of
third-party middleware. Free and open-source, maybe, but one would think
Microsoft would get this stuff to work right out of the .NET box.

Jon
Apr 5 '07 #11

This might be a dumb question, but...

Are you using a 64-bit version of IIS on a 64-bit version of Windows?

In 32-bit land, it wouldn't surprise me at all to find a practical limit
of around 1GB for a single process. The address space available to
the process is only 2GB, and with fragmentation and other issues, a
process may very well not be able to satisfy more than about 1GB of large
allocations.

If you're running into a 32-bit Windows issue, it's not clear to me that
you'll be able to do much better than IIS is already doing for you.

If this is all under 64-bit Windows then I agree IIS should work better
than that, and none of the above is relevant.

Pete
Apr 5 '07 #12


"Peter Duniho" <Np*********@nnowslpianmk.com> wrote in message
news:op***************@petes-computer.local...
>Are you using a 64-bit version of IIS on a 64-bit version of Windows?
No, but 1GB for IIS + 1GB for a service app = 2GB utilization and 1GB per
process, with another 1/2 to 1GB of physical RAM readily available on a 4GB
box. Whereas, 1GB max for IIS alone, with the service process inside IIS, = 1/2
GB for the service process and 1/2GB for the web app (assuming that memory
utilization was taken up proportionately). Moving the process out seems to
be a no-brainer.

I have heard that 64-bit for IIS is pointlessly ineffective. Whereas, 64-bit
for SQL Server was a blazing improvement both for speed and for memory
scalability.

Jon
Apr 6 '07 #13

Jon,
In this article
http://www.eggheadcafe.com/articles/20050116.asp
I did some "proof of concept" work with MMFs using the MetalWrench Toolbox
assembly, a copy of which is in the bin/debug folder of the associated
download.

It was the only one I tested that held up consistently under heavy load.
Peter

--
Site: http://www.eggheadcafe.com
UnBlog: http://petesbloggerama.blogspot.com
Short urls & more: http://ittyurl.net


"Jon Davis" wrote:
I see what you're saying, per your original reply. I guess I need to do some
tests to see what impact MarshalByRef implementation, for remoting, would
have. The problem is the level of effort for MMF, which isn't native to .NET
(or is it?).

We are performing search queries between the IIS process and the indexing
process, so the client/server model works fine for us (better, actually).
The compromise of ease of implementation w/ MarshalByRef for Ipc vs. the
performance overhead of TCP sockets might be legitimate enough to go with
IPC remoting over named pipes.

But if you can find a sample of MMF that's straightforward, obviously not
necessarily as straightforward as the link I just found and posted but
relatively readable, I'd be most grateful.

I'm really not interested at this time in pursuing the purchase of
third-party middleware. Free and open-source, maybe, but one would think
Microsoft would get this stuff to work right out of the .NET box.

Jon
"Nicholas Paldino [.NET/C# MVP]" <mv*@spam.guard.caspershouse.comwrote in
message news:On**************@TK2MSFTNGP04.phx.gbl...
Jon,

You could use that, but in the end, you have to consider how much data
you are going to push across this pipe. MMF might be better if you have
to access that data, but for signalling between the two applications, I
would go with remoting, or, if you can use .NET 3.0 (which is 2.0 with
some additional class libraries) then, you should use WCF.
--
- Nicholas Paldino [.NET/C# MVP]
- mv*@spam.guard.caspershouse.com

"Jon Davis" <jo*@REMOVE.ME.PLEASE.jondavis.netwrote in message
news:%2****************@TK2MSFTNGP05.phx.gbl...
>
"Jon Davis" <jo*@REMOVE.ME.PLEASE.jondavis.netwrote in message
news:us**************@TK2MSFTNGP04.phx.gbl...
This looks rediculously straightforward.

http://www.developer.com/net/vb/arti...0926_3520891_2

Erm, that's Part 2 of a two-part, ridiculously straightforward article.

http://www.developer.com/net/vb/arti...0926_3520891_1


Apr 6 '07 #14

On Thu, 05 Apr 2007 16:54:51 -0700, Jon Davis
<jo*@REMOVE.ME.PLEASE.jondavis.net> wrote:
[...]
No, but 1GB for IIS + 1GB for a service app = 2GB utilization and 1GB
per process, with another 1/2 to 1 GB physical RAM readily available on
a 4GB
box. Whereas, 1GB max for IIS alone w/ the service process inside IIS =
1/2 GB for the service process and 1/2 for the web app (assuming that
memory
utilization was taken up proportionately). Moving the process out seems
to be a no-brainer.
Unless you run into performance and/or code maintenance problems doing
so. :)

At the very least, the fact that you're using Win32 seems to explain to me
the behavior you're seeing. It's not that IIS refuses to use more
physical RAM. It's likely simply that it can't.
I have heard that 64-bit for IIS is pointlessly ineffective. Whereas,
64-bit for SQL Server was a blazing improvement both for speed and for
memory scalability.
I have heard that people hear a lot of things about software that turn out
to not be true. :)

Seriously though, until you've tried it you won't really know whether
64-bit IIS would help you or not. Even if in some tests 64-bit IIS doesn't
improve performance, that doesn't tell you very much unless you know how
those tests were run and whether they even had a theoretical chance to
consume more than one or two gigabytes of memory. I can easily see how,
for a lot of web server scenarios, virtual or physical memory just doesn't
play a big part in performance.

I don't know what your specific scenario is or what sort of resources you
have to apply to the problem. But if it were me, I'd try my application
under 64-bit Windows with 64-bit IIS to see if that was enough to gain the
benefit in performance I was looking for. I'd do that before I spent a
lot of time trying to work around apparent deficiencies in the 32-bit
paradigm, especially given that 64-bit is here now and will be dominant in
the near future (especially for server stuff).

Pete
Apr 6 '07 #15

"Jon Davis" <jo*@REMOVE.ME.PLEASE.jondavis.netwrote in message
news:%2****************@TK2MSFTNGP06.phx.gbl...
>
"Peter Duniho" <Np*********@nnowslpianmk.comwrote in message
news:op***************@petes-computer.local...
>On Thu, 05 Apr 2007 10:32:44 -0700, Jon Davis <jo*@REMOVE.ME.PLEASE.jondavis.net> wrote:
>>The memory load could be in the range of 1GB, basically hosting indexes
in-memory for fast access to search thousands of large pieces of data. The server has
4GB, but IIS never uses more than 1GB, which leaves us 3GB
unused, and also makes IIS vulnerable to running out of RAM if we were to
fill up its tiny 1GB rather than isolate the process.

This might be a dumb question, but...

Are you using a 64-bit version of IIS on a 64-bit version of Windows?

In 32-bit land, it wouldn't surprise me at all to find a practical limit of around 1GB
for a single process. The address space itself available to the process is only 2GB, and
with fragmentation and other issues, a process may very well not be able to allocate more
than about 1GB of large things.

If you're running into a 32-bit Windows issue, it's not clear to me that you'll be able
to do much better than IIS is already doing for you.

If this is all under 64-bit Windows then I agree IIS should work better than that, and
none of the above is relevant.

Pete

No, but 1GB for IIS + 1GB for a service app = 2GB utilization and 1GB per process, with
another 1/2 to 1 GB physical RAM readily available on a 4GB box. Whereas, 1GB max for IIS
alone w/ the service process inside IIS = 1/2 GB for the service process and 1/2 for the
web app (assuming that memory utilization was taken up proportionately). Moving the
process out seems to be a no-brainer.

I have heard that 64-bit for IIS is pointlessly ineffective. Whereas, 64-bit for SQL
Server was a blazing improvement both for speed and for memory scalability.

Jon



First of all, asp.net does NOT run in the IIS process space; each asp.net program runs in a
process separate from IIS.
Second, there is no such 1GB limit for asp.net either; to me, it looks like you are
allocating a single array of (whatever type), and here you are limited by the largest chunk
of contiguous memory available in the process space at the moment of allocation. This chunk
is ~1.7 GB when the process starts on a 32 bit version of Windows, but can easily drop to a
few MB when your allocation scheme breaks the largest space down into fragments.
In this case, the only solution is to move to 64 bit; remoting, even over shared memory, is
no solution at all, since you can never guarantee finding ~1GB of free contiguous memory in
a 32 bit process.
Note that you can't allocate objects larger than 2GB in .NET, even on 64 bit.
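
One way to soften the contiguity problem described above, short of moving to 64 bit, is to hold the large data in fixed-size chunks rather than one giant array; each chunk then only needs a contiguous run of its own size. A hedged sketch (the chunk size and the ChunkedBuffer type are illustrative, not anything from this thread):

```csharp
// A ~1GB index held as a single array needs ~1GB of *contiguous* virtual
// address space, which a long-running 32-bit process often cannot find.
// Holding the same bytes in fixed-size chunks only requires contiguous
// runs of ChunkSize each.
using System;
using System.Collections.Generic;

class ChunkedBuffer
{
    const int ChunkSize = 16 * 1024 * 1024; // 16 MB per chunk (illustrative)
    readonly List<byte[]> chunks = new List<byte[]>();

    public ChunkedBuffer(long totalBytes)
    {
        while (totalBytes > 0)
        {
            int size = (int)Math.Min(ChunkSize, totalBytes);
            chunks.Add(new byte[size]);
            totalBytes -= size;
        }
    }

    // Index into the logical buffer as if it were one flat array.
    public byte this[long offset]
    {
        get { return chunks[(int)(offset / ChunkSize)][(int)(offset % ChunkSize)]; }
        set { chunks[(int)(offset / ChunkSize)][(int)(offset % ChunkSize)] = value; }
    }
}
```

This trades a division per access for a far better chance that the allocations succeed in a fragmented 32-bit address space.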

Willy.
Apr 6 '07 #16


"Willy Denoyette [MVP]" <wi*************@telenet.bewrote in message
news:Op**************@TK2MSFTNGP05.phx.gbl...
First of all, asp.net does NOT run in the IIS process space; each asp.net
program runs in a process separate from IIS.
Of course; all instances of "IIS" mentioned in this thread are short-form
references to the ASP.NET app we are executing.
Second, there is no such 1GB limit for asp.net either,
IIS "self-tunes" itself and avoids taking up more than roughly one quarter
of the available physical RAM, which is far less than standalone process
will allow for. We have seen OutOfMemoryExceptions frequently by stepping
above this threshhold, despite at least a gig of available physical RAM.
This does not work well for a dedicated web host machine.

Of course, we can look at experimenting more with 64-bit machines that have
gobs of RAM and see what a quarter of 8GB (2GB) would do for us, but that's
not the point. The point is I am looking in parallel at IPC and that's what
this thread is for.

Jon

Apr 6 '07 #17

On Fri, 06 Apr 2007 09:17:45 -0700, Jon Davis
<jo*@REMOVE.ME.PLEASE.jondavis.net> wrote:
[...]
IIS "self-tunes" itself and avoids taking up more than roughly one
quarter of the available physical RAM, which is far less than standalone
process will allow for.
Can you document the claim that "IIS 'self-tunes' itself and avoids taking
up more than roughly one quarter of the available physical RAM"? I have
never heard anything of the sort, and there's nothing about the
description of the behavior that you've given that suggests it's true.

It is false that there is any relationship between physical RAM and the
maximum allocation that a "standalone process will allow for". Under
Win32, the 2GB virtual address space limit means that even under optimal
conditions no application will ever use more than half of the installed
ram when 4GB is installed, and a typical application will only be able to
allocate some amount much less (and 1GB is not at all an uncommon upper
bound for a typical application).
We have seen OutOfMemoryExceptions frequently by stepping
above this threshold, despite at least a gig of available physical RAM.
This means nothing. If you are running 32-bit Windows, with 4GB of RAM it
is absolutely normal for an application to not be able to allocate any
more memory even while physical RAM has 1GB or more available. With 4GB
installed, the limit on how much memory an application can allocate isn't
the physical RAM, it's the virtual address space.
This does not work well for a dedicated web host machine.

Of course, we can look at experimenting more with 64-bit machines that
have gobs of RAM and see what a quarter of 8GB (2GB) would do for us,
but that's not the point. The point is I am looking in parallel at IPC
and that's what this thread is for.
Your assumption that under 64-bit Windows you would only be able to
allocate 2GB of memory with 8GB installed is unfounded. Under 64-bit
Windows, even with only 4GB installed, you should be able to allocate as
much memory as you can use, and if that exceeds 4GB then you can be
assured of consuming all physical RAM. There's no arbitrary "1 to 4
ratio" between physical RAM and allocations allowed.

Also, your interest in IPC is predicated on an incorrect understanding of
what's going on with the memory management. Why waste time pursuing a
solution that is assured to do no better than what is already occurring?
You need to better understand the memory management issue you're dealing
with before you start tackling work-arounds to it.

Pete
Apr 6 '07 #18

Can you document the claim that "IIS 'self-tunes' itself

http://www.microsoft.com/technet/pro...b70352baa.mspx
and avoids taking up more than roughly one quarter of the available
physical RAM"?
No, this was an observation of our site.

>We have seen OutOfMemoryExceptions frequently by stepping
above this threshold, despite at least a gig of available physical RAM.

This means nothing. If you are running 32-bit Windows, with 4GB of RAM it
is absolutely normal for an application to not be able to allocate any
more memory even while physical RAM has 1GB or more available.
That's not true; a standalone app can and will allocate RAM and will begin
using available virtual memory when unreserved physical RAM is no longer
available. IIS is FAR more likely to throw OutOfMemoryExceptions than a
standalone app. I may not be a world-class developer yet but 10 years of
experience with developing both IIS apps and Windows apps gives me enough to
go on.

IIS memory utilization in itself is not the issue here. The issue is that we
have a large process that we've decided will be implemented as a Windows
service and that will be accessible by multiple web apps.
Your assumption that under 64-bit Windows you would only be able to
allocate 2GB of memory with 8GB installed is unfounded.
Although self-limitation of the raw memory utilization of IIS isn't
"unfounded", said ratio was stated slightly tongue-in-cheek because the
symptom was observed by us. We did already try using 64-bit Windows with no
significant memory utilization changes for IIS. But since this is still
OT--as I said, the purpose of the thread was to examine IPC--this is the
last I will speak of it. What you don't know and don't need to know is that
there is more going on here than basic IIS limitations.

Jon

Apr 6 '07 #19

On Fri, 06 Apr 2007 10:25:08 -0700, Jon Davis
<jo*@REMOVE.ME.PLEASE.jondavis.net> wrote:
>Can you document the claim that "IIS 'self-tunes' itself
and avoids taking up more than roughly one quarter of the available
physical RAM"?
>
http://www.microsoft.com/technet/pro...b70352baa.mspx
There is nothing in that document that suggests IIS limits itself to 1/4
of available RAM. As far as "tuning" more generally goes, you'll note (I
hope) that part of the tuning that goes on is IIS ensuring that it doesn't
starve other processes of memory. Your proposed change to your setup
involves essentially doing just that, perhaps even starving IIS.
>This means nothing. If you are running 32-bit Windows, with 4GB of RAM
it is absolutely normal for an application to not be able to allocate
any
more memory even while physical RAM has 1GB or more available.

That's not true; a standalone app can and will allocate RAM and will
begin using available virtual memory when unreserved physical RAM is no
longer
available.
Wrong. And until you correct your misunderstanding of the memory
management model used by Windows, you are going to continue trying to
solve your problem the wrong way.

Quick summary (applies to 32-bit Windows, same principles apply -- mostly
-- to 64-bit Windows, but of course the actual limits are vastly higher):

* ALL memory allocations in a process go through virtual memory. ALL
of them. A normal application (this would include your service) under
Windows does not get to allocate physical RAM directly. The first part of
any memory allocation involves allocating a chunk of virtual address
space, whether or not there is sufficient physical RAM to satisfy that
allocation.

* The virtual address space for 32-bit pointers is theoretically 4GB,
but because of the way that Windows uses those pointers, only half of this
is available to the process directly (up to 3GB with a special switch
booting the OS). Thus, no process can EVER allocate more than 2GB.
Because of fragmentation, most processes will find themselves limited to
maximum total allocations somewhat less than 2GB. The larger the
individual blocks the process is trying to allocate, the more likely this
will be a problem.

* When a process references a virtual address that hasn't been
"committed", at that point physical RAM is assigned to the virtual
address. But since a process cannot address more than 2GB of virtual
address space, the process can never reference more than 2GB of physical
RAM as well. In reality, a process almost never has their entire virtual
allocation committed, and so physical RAM usage is almost always less (and
usually considerably less) than their total virtual memory allocation.
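
The reserve/commit split described in the points above can be made concrete with the Win32 VirtualAlloc API; a hedged sketch, not taken from any code in this thread:

```csharp
// Demonstrates that reserving virtual address space and committing
// physical RAM are separate steps. P/Invoke declaration per Win32.
using System;
using System.Runtime.InteropServices;

class ReserveCommitDemo
{
    [DllImport("kernel32.dll", SetLastError = true)]
    static extern IntPtr VirtualAlloc(IntPtr lpAddress, UIntPtr dwSize,
        uint flAllocationType, uint flProtect);

    const uint MEM_RESERVE = 0x2000;
    const uint MEM_COMMIT = 0x1000;
    const uint PAGE_READWRITE = 0x04;

    static void Main()
    {
        // Reserve 256 MB of address space: no physical RAM is used yet,
        // but a 32-bit process only has ~2GB of such space in total.
        IntPtr region = VirtualAlloc(IntPtr.Zero,
            (UIntPtr)(256 * 1024 * 1024), MEM_RESERVE, PAGE_READWRITE);

        // Commit one page inside the reservation: only now is physical
        // RAM (or pagefile) charged, and only for these 4 KB.
        VirtualAlloc(region, (UIntPtr)4096, MEM_COMMIT, PAGE_READWRITE);
    }
}
```

This is why a process can exhaust its address space while plenty of physical RAM remains free.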
IIS is FAR more likely to throw OutOfMemoryExceptions than a
standalone app. I may not be a world-class developer yet but 10 years of
experience with developing both IIS apps and Windows apps gives me
enough to go on.
It's apparent from your lack of understanding of the Windows memory
management model that that's not true. You need to accumulate at least a
little more experience before you have "enough to go on".
IIS memory utilization in itself is not the issue here.
Then you have asked your question incorrectly. In all of your posts, how
IIS uses memory is a central part of your complaint. If IIS memory
utilization is not the issue, then I recommend you stop including it in
your queries.
The issue is that we have a large process that we've decided will be
implemented as a Windows service and that will be accessible by multiple
web apps.
As near as I can tell, the reason you've decided to implement it as a
Windows service is that you believe you can get around the memory
allocation issues by doing so. The problem is that you can't. The same
memory allocation issues that restrict IIS will also restrict you.

Pete
Apr 6 '07 #20


"Peter Duniho" <Np*********@nnowslpianmk.comwrote in message
news:op***************@petes-computer.local...
On Fri, 06 Apr 2007 10:25:08 -0700, Jon Davis
>IIS memory utilization in itself is not the issue here.

Then you have asked your question incorrectly. In all of your posts, how
IIS uses memory is a central part of your complaint.
There was no complaint, but a small portrait of the background which frankly
I shouldn't have mentioned. I was sloppy about describing the background by
pronouncing a few symptomatic inferences, but it was all because I
considered it completely off-topic.
If IIS memory utilization is not the issue, then I recommend you stop ...
You wrote a lot of stuff here, but you don't get it. We have 30 or so
instances of web apps of identical codebases that are co-branded and hosted
on the same server. The memory each of those instances takes up is roughly
identical as they are using the same codebase. There is RAM enough on the
server to support dropping in a single indexing process that takes up a gig
or so. Dropping that process directly into the ASP.NET codebase will not
work, as the server will not support being multiplied by a factor of 30. We
need an isolated Windows service. None of this is any of your business. I
came here for IPC discussion. I will find another newsgroup for memory
optimization.

Jon
Apr 6 '07 #21

On Fri, 06 Apr 2007 11:22:23 -0700, Jon Davis
<jo*@REMOVE.ME.PLEASE.jondavis.net> wrote:
There was no complaint, but a small portrait of the background which
frankly I shouldn't have mentioned.
Yes, it's true. If you include extraneous things that are irrelevant to
your actual question, you will find it very difficult to get helpful
answers.
You wrote a lot of stuff here, but you don't get it.
All of what I wrote was intended to help you understand better the memory
management model in use, something you clearly don't currently
understand. If you don't find it helpful, I suppose that's your
prerogative. But make no mistake: you don't understand the memory
management model.
[...] We
need an isolated Windows service. None of this is any of your business.
Not "my business"? Why so hostile? You are the one who brought up the
memory management issues, and all I was trying to do was help you
understand them better so that you don't waste time working on a solution
that doesn't actually solve your problem.

You may also want to review the bromide about not biting the hand that
feeds you.

Pete
Apr 6 '07 #22

Thanks Peter. I am grateful for the link.

I'm still not sure how significant the overhead of marshalling w/ IPC will
be but I am going to go ahead and move forward with IPC for a quick
implementation, then I might come back and refer to your article for MMF for
a possible refactoring and refinement.
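
Since .NET 2.0 ships no managed memory-mapped-file API, an MMF toolbox like the one in the article ultimately wraps a few Win32 calls. A minimal hedged sketch of a named, pagefile-backed shared-memory section (the section name and size are illustrative):

```csharp
// Minimal P/Invoke sketch of a named shared-memory section. .NET 2.0 has
// no managed MMF API, so the Win32 calls must be declared by hand.
using System;
using System.Runtime.InteropServices;

class SharedMemoryDemo
{
    [DllImport("kernel32.dll", SetLastError = true)]
    static extern IntPtr CreateFileMapping(IntPtr hFile, IntPtr lpAttributes,
        uint flProtect, uint dwMaximumSizeHigh, uint dwMaximumSizeLow, string lpName);

    [DllImport("kernel32.dll", SetLastError = true)]
    static extern IntPtr MapViewOfFile(IntPtr hFileMapping, uint dwDesiredAccess,
        uint dwFileOffsetHigh, uint dwFileOffsetLow, UIntPtr dwNumberOfBytesToMap);

    const uint PAGE_READWRITE = 0x04;
    const uint FILE_MAP_ALL_ACCESS = 0x000F001F;
    static readonly IntPtr INVALID_HANDLE_VALUE = new IntPtr(-1);

    static void Main()
    {
        // Backed by the page file (INVALID_HANDLE_VALUE), 1 MB, named so
        // that other processes can open the same section by name.
        IntPtr hMap = CreateFileMapping(INVALID_HANDLE_VALUE, IntPtr.Zero,
            PAGE_READWRITE, 0, 1024 * 1024, "Local\\DemoSharedMem");
        IntPtr view = MapViewOfFile(hMap, FILE_MAP_ALL_ACCESS, 0, 0, UIntPtr.Zero);

        // Visible to any process that maps the same named section.
        Marshal.WriteInt32(view, 42);
    }
}
```

The price of this approach is manual synchronization (e.g. a named Mutex) and manual serialization of whatever you put in the shared region.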

But please don't take that as disregard. I'll keep this handy and it's
exactly what I asked for.

Jon
"Peter Bromberg [C# MVP]" <pb*******@yahoo.yabbadabbadoo.comwrote in
message news:92**********************************@microsof t.com...
Jon,
In this article
http://www.eggheadcafe.com/articles/20050116.asp
I did some "proof of concept" work with MMF's using the MetalWrench
Toolbox
assembly, a copy of which is in the bin/debug folder of the associated
download.

It was the only one I tested that held up consistently under heavy load.
Peter

--
Site: http://www.eggheadcafe.com
UnBlog: http://petesbloggerama.blogspot.com
Short urls & more: http://ittyurl.net


"Jon Davis" wrote:
>I see what you're saying, per your original reply. I guess I need to do
some
tests to see what impact MarshalByRef implementation, for remoting, would
have. The problem is the level of effort for MMF, which isn't native to
.NET
(or is it?).

We are performing search queries between the IIS process and the indexing
process, so the client/server model works fine for us (better, actually).
The compromise of ease of implementation w/ MarshalByRef for Ipc vs. the
performance overhead of TCP sockets might be legitimate enough to go with
IPC remoting over named pipes.

But if you can find a sample of MMF that's straightforward, obviously not
necessarily as straightforward as the link I just found and posted but
relatively readable, I'd be most grateful.

I'm really not interested at this time in pursuing the purchase of
third-party middleware. Free and open-source, maybe, but one would think
Microsoft would get this stuff to work right out of the .NET box.

Jon
"Nicholas Paldino [.NET/C# MVP]" <mv*@spam.guard.caspershouse.comwrote
in
message news:On**************@TK2MSFTNGP04.phx.gbl...
Jon,

You could use that, but in the end, you have to consider how much
data
you are going to push across this pipe. MMF might be better if you
have
to access that data, but for signalling between the two applications, I
would go with remoting, or, if you can use .NET 3.0 (which is 2.0 with
some additional class libraries) then, you should use WCF.
--
- Nicholas Paldino [.NET/C# MVP]
- mv*@spam.guard.caspershouse.com

"Jon Davis" <jo*@REMOVE.ME.PLEASE.jondavis.netwrote in message
news:%2****************@TK2MSFTNGP05.phx.gbl...

"Jon Davis" <jo*@REMOVE.ME.PLEASE.jondavis.netwrote in message
news:us**************@TK2MSFTNGP04.phx.gbl...
This looks ridiculously straightforward.

http://www.developer.com/net/vb/arti...0926_3520891_2

Erm, that's Part 2 of a two-part, ridiculously straightforward
article.

http://www.developer.com/net/vb/arti...0926_3520891_1



Apr 6 '07 #23

"Jon Davis" <jo*@REMOVE.ME.PLEASE.jondavis.netwrote in message
news:uo**************@TK2MSFTNGP03.phx.gbl...
>
"Willy Denoyette [MVP]" <wi*************@telenet.bewrote in message
news:Op**************@TK2MSFTNGP05.phx.gbl...
>First of all, asp.net does NOT run in the IIS process space; each asp.net program runs in
a process separate from IIS.

Of course; all instances of "IIS" mentioned in this thread are short-form references to the
ASP.NET app we are executing.
>Second, there is no such 1GB limit for asp.net either,

IIS "self-tunes" itself and avoids taking up more than roughly one quarter of the
available physical RAM, which is far less than standalone process will allow for. We have
seen OutOfMemoryExceptions frequently by stepping above this threshhold, despite at least
a gig of available physical RAM. This does not work well for a dedicated web host machine.
Once again, IIS and ASP.NET run in different processes, InetInfo.exe and w3wp.exe (one per
application pool) respectively, on W2K3.
IIS doesn't control the memory reserved by asp.net worker processes, this is done through
the web.config files or the IIS metabase, depending on the version of IIS you are running.
IIS5.1 uses the Web.Config "ProcessModel" section which contains a "memoryLimit" entry
defining the amount of memory that can be allocated before the process recycles (after
throwing an OOM). IIS6 Worker processes don't use the ProcessModel from the config files,
instead the process model is configured through the "aspnet_isapi.dll", when running IIS6 in
native mode, configuration of the processModel for the worker processes must be done through
the IIS Manager User Interface, or programmatically via WMI.

Of course, we can look at experimenting more with 64-bit machines that have gobs of RAM
and see what a quarter of 8GB (2GB) would do for us, but that's not the point. The point
is I am looking in parallel at IPC and that's what this thread is for.
There is no such thing as a quarter of...; the problem is the description of the
"memoryLimit", which really is the max. amount of per-process Virtual Memory to be used.
When running on 32 bit, the process limit is 2GB; with a default of 60% for "memoryLimit",
this amounts to 1.2GB. But be aware that when allocating even much less than 1GB in one
chunk, you can get OOMs because there is no such amount of contiguous space available!
When running on 64 bit, the Virtual Memory size is 8TB; the "memoryLimit" is no longer
expressed in %, instead you have to specify the amount of memory in KB (same for IIS7).
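
For reference, on IIS 5.x the setting described above lives in the <processModel> element of machine.config; the 60% shown below is the documented default, not a recommendation:

```xml
<!-- machine.config, ASP.NET on IIS 5.x: recycle the aspnet_wp.exe worker
     process once its memory use reaches 60% of system memory (the default). -->
<system.web>
  <processModel enable="true" memoryLimit="60" />
</system.web>
```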

Willy.

Apr 6 '07 #24


"Peter Duniho" <Np*********@nnowslpianmk.comwrote in message
news:op***************@petes-computer.local...
On Fri, 06 Apr 2007 11:22:23 -0700, Jon Davis
<jo*@REMOVE.ME.PLEASE.jondavis.net> wrote:
>There was no complaint, but a small portrait of the background which
frankly I shouldn't have mentioned.

Yes, it's true. If you include extraneous things that are irrelevant to
your actual question, you will find it very difficult to get helpful
answers.
On the contrary, I got some incredibly helpful information from one who
answered my question rather than questioned the question.

Jon
Apr 6 '07 #25


"Peter Duniho" <Np*********@nnowslpianmk.comwrote in message
news:op***************@petes-computer.local...
On Fri, 06 Apr 2007 11:22:23 -0700, Jon Davis
<jo*@REMOVE.ME.PLEASE.jondavis.net> wrote:

All of what I wrote was intended to help you understand better the memory
management model in use, something you clearly don't currently
understand. But make no mistake: you don't understand the memory
management model.
..
Why so hostile?
You're smart. I have no doubt you can figure this one out.

Jon
Apr 6 '07 #26

On Fri, 06 Apr 2007 12:11:13 -0700, Jon Davis
<jo*@REMOVE.ME.PLEASE.jondavis.net> wrote:
>Why so hostile?

You're smart. I have no doubt you can figure this one out.
Nope. I honestly can't. Your hostility is bewildering to me, since all I
was trying to do is help you.

I certainly won't make that mistake again.
Apr 6 '07 #27


"Peter Duniho" <Np*********@nnowslpianmk.comwrote in message
news:op***************@petes-computer.local...
On Fri, 06 Apr 2007 12:11:13 -0700, Jon Davis
<jo*@REMOVE.ME.PLEASE.jondavis.net> wrote:
>All of what I wrote was intended to help you understand better the memory
>>management model in use, something you clearly don't currently
understand. But make no mistake: you don't understand the memory
management model.
..
Why so hostile?

You're smart. I have no doubt you can figure this one out.

Nope. I honestly can't. Your hostility is bewildering to me, since all I
was trying to do is help you.
Shame that I have to spell it out for you.

1) You remain off-topic, and insist on questioning the question, which is
not helpful. I know that you think you are helpful, but you're stepping on
toes where you were not invited.

2) You're incessantly condescending. You beat people down and spell out in
detail that they are ignorant. For a hard-working software professional,
that's essentially calling them stupid. You lack tact. The fact that you
cannot see it is itself reason enough to want to avoid you and be abrasive.

3) You quote me out of context. Cases in point: In my post, the URL to the
IIS memory management matter was with regard to the phrase "self-tunes", but
to the phrase "one quarter of the available RAM" I specifically answered
"No". Yet you quoted me referencing the URL and said that the URL does not
talk about IIS using one quarter of available RAM and used that against my
point. Or, case in point, in this post you snipped out the answer to the
question, "Why so hostile?"; the answer was because "all of what [you] wrote
was intended to ["help"] me understand better the memory management model in
use, something [you] insist [I] don't understand. That sort of "help" was
not asked for. And of course you would think I don't understand it, because
I'm not explaining my understanding of it, because I'm not putting thought
to it right now, because I am *focused* on IPC!!!

Jon
Apr 6 '07 #28

BTW just a word of advice. If you really feel strongly that a matter is
based on an incorrect premise, focus solely on the premise ("that
is incorrect"). Don't dwell on the human being who holds the premise ("you
clearly don't know what you're doing").

Willy Denoyette's replies have been very helpful, btw. He seemed to figure
this one out.

Jon
Apr 6 '07 #29
