Bytes IT Community

Client-side TCP socket receiving "Address already in use" upon connect

Hi,

The reason is that my application does about 16 connects and data
transfers per second, to the same 16 remote hosts. After approx 200 secs
there are 4000 sockets waiting to be garbage collected by the OS. At
this point it seems that connect loops around and starts reusing the
same local addresses it used 4000 connections ago, resulting in an
"Address already in use" exception.

A possible solution to this would be to keep the connection to each
remote host open, but that would require parsing the received data back
into the individual data items (which are files, by the way) sent by the
application.

My question is if there are any other methods of solving this? Maybe a
socket option of some sort...
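For reference, here is a minimal, self-contained sketch of the pattern described above (one short-lived connection per transfer). It uses a local stand-in server rather than the actual 16 remote hosts, and the helper names are mine, not from the application:

```python
import socket
import threading

def drain_server(listener):
    """Accept connections forever; read each one until EOF, then close."""
    while True:
        try:
            conn, _ = listener.accept()
        except OSError:        # listening socket was closed
            return
        with conn:
            while conn.recv(4096):
                pass           # drain and discard the payload

# A local stand-in for one of the 16 remote hosts.
listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))    # port 0: let the OS pick a free port
listener.listen(16)
host, port = listener.getsockname()
threading.Thread(target=drain_server, args=(listener,), daemon=True).start()

def send_file_once(payload):
    """One transfer = one short-lived connection (the pattern above).
    Each call leaves a client-side socket behind in TIME_WAIT."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.connect((host, port))
    s.sendall(payload)
    s.close()    # client closes first -> client side enters TIME_WAIT

transfers = 0
for i in range(32):    # at ~16/s, ~4000 such ports are consumed in ~250 s
    send_file_once(b"file contents %d" % i)
    transfers += 1

listener.close()
```

At 16 of these per second, the ~4000 ephemeral ports XP allows by default are exhausted in roughly 250 seconds, which matches the observed failure time.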

regards
Sep 2 '06 #1
11 Replies


Sybren Stuvel wrote:
> Tor Erik enlightened us with:
>> The reason is that my application does about 16 connects and data
>> transfers per second, to the same 16 remote hosts. After approx 200
>> secs there are 4000 sockets waiting to be garbage collected by the OS.
>
> Which OS are we talking about?

Windows XP

>> At this point it seems that connect loops and starts using the same
>> local addresses it used 4000 connections ago, resulting in an
>> "Address already in use" exception.
>
> After how many seconds does this happen?

200 seconds approx

>> My question is if there are any other methods of solving this? Maybe
>> a socket option of some sort...
>
> If I'm correct (please correct me if I'm not), on Linux you can use
> 'sysctl -w net.ipv4.tcp_fin_timeout=X' to set the time between closing
> the socket and releasing it to be reused. You can also check the
> SO_REUSEADDR argument to the setsockopt function. Read 'man 7 socket'
> for more info.

I've read about SO_REUSEADDR. As far as I understand, this is what
SO_REUSEADDR is for:

1. Allowing a listening socket to bind itself to its well-known port
even if previously established connections use it as their local port.
Setting this option should be done between the calls to socket and bind,
and hence it is only usable for listening sockets, not client sockets
like mine.

2. Allowing multiple servers on the same host, with different IP
addresses, to listen on the same port.

I've tried setting this option, but could not see any notable changes...
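For what it's worth, here is a minimal sketch of point 1 above in Python: the option has to be set after socket() but before bind() to take effect. The port-0 bind is my own choice so the snippet runs anywhere:

```python
import socket

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# SO_REUSEADDR must be set between socket() and bind() to have any effect.
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
srv.bind(("127.0.0.1", 0))    # port 0: let the OS pick a free port
srv.listen(5)

# Confirm the option is now set on the listening socket.
reuse = srv.getsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR)
srv.close()
```

As noted above, this helps a server rebind to a port stuck in TIME_WAIT; it does nothing for a client that never calls bind at all.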
> Sybren
Sep 3 '06 #2

> I've read about SO_REUSEADDR. As far as I understand, this is what
> SO_REUSEADDR is for:
> ...
> I've tried setting this option, but could not see any notable changes...

I was having a similar problem, where as soon as my program exited, it
would get started up again, but could not bind to the same address.

So I added the following straight after I create my server object:

    server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)

And it worked. Note that my program was running on Linux, so this might
be a Windows issue.

Sep 3 '06 #3

Tor Erik wrote:
> The reason is that my application does about 16 connects and data
> transfers per second, to the same 16 remote hosts. After approx 200
> secs there are 4000 sockets waiting to be garbage collected by the OS.

what does "netstat" say about these sockets ?

</F>

Sep 3 '06 #4

Fredrik Lundh wrote:
> Tor Erik wrote:
>> The reason is that my application does about 16 connects and data
>> transfers per second, to the same 16 remote hosts. After approx 200
>> secs there are 4000 sockets waiting to be garbage collected by the OS.
>
> what does "netstat" say about these sockets ?

They are in the TIME_WAIT state... The MSDN Library has an article on
how to solve this:

http://msdn.microsoft.com/library/de...1fc8ba06a4.asp

Summing up, one could either:

1. Increase the upper range of ephemeral ports that are dynamically
allocated to client TCP/IP socket connections:

Set the registry key
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters\MaxUserPort
to a new DWORD value (5000 - 65534).
The default in XP leaves about 3976 usable ephemeral ports -
http://support.microsoft.com/kb/Q149532

or

2. Reduce the client TCP/IP socket connection timeout from its default
value of 240 seconds:

Set the registry key
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters\TcpTimedWaitDelay
to a new DWORD value (30 - 300).

The TCP RFC (RFC 793) recommends a value of 2*MSL (Maximum Segment
Lifetime). The general consensus about the value of MSL seems to be 1-2
minutes, depending on the underlying network... (2 * 2 min = 2 * 120 sec
= 240 sec)

I do not want to alter my registry, so I'm currently testing an idea
where I let the client connect and send its content, appended with my
own "magic" EOF byte-sequence. When the server receives this EOF, it
takes care to close the connection. This should eliminate the problem,
as it is the peer that closes the connection first that enters the
TIME_WAIT state...

I will report my experiences...
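A sketch of what this sentinel idea might look like, assuming a hypothetical EOF marker and helper names of my own (the follow-ups in this thread report that it did not, in fact, move TIME_WAIT to the server):

```python
import socket
import threading

# Hypothetical sentinel; it must never occur inside a real payload.
EOF_MARK = b"\x00MAGIC_EOF\x00"

def serve_one(listener, results):
    """Read until the sentinel arrives, then have the *server* close."""
    conn, _ = listener.accept()
    buf = b""
    while EOF_MARK not in buf:
        chunk = conn.recv(4096)
        if not chunk:          # client vanished before the sentinel
            break
        buf += chunk
    results.append(buf.split(EOF_MARK, 1)[0])
    conn.close()   # the hope: server closes first, server takes TIME_WAIT

listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))
listener.listen(1)
results = []
t = threading.Thread(target=serve_one, args=(listener, results))
t.start()

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(listener.getsockname())
client.sendall(b"file contents" + EOF_MARK)   # payload + magic EOF
t.join()
client.close()
listener.close()
```

The framing itself works; the open question the thread goes on to discuss is which side actually ends up in TIME_WAIT.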
Sep 3 '06 #5

ke***********@gmail.com wrote:
>> I've read about SO_REUSEADDR. As far as I understand, this is what
>> SO_REUSEADDR is for:
>> ...
>> I've tried setting this option, but could not see any notable
>> changes...
>
> I was having a similar problem, where as soon as my program exited, it
> would get started up again, but could not bind to the same address.
> So I added the following straight after I create my server object:
>
>     server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
>
> And it worked. Note that my program was running on Linux, so this
> might be a Windows issue.

... and note also that your program was apparently a server, while the
OP was reporting an error on a client program that presumably asks for
an ephemeral port rather than a specifically-numbered one.

Since there are roughly 64,000 ports, the real question seems to be why
his client runs out after about 4,000.

regards
Steve
--
Steve Holden +44 150 684 7255 +1 800 494 3119
Holden Web LLC/Ltd http://www.holdenweb.com
Skype: holdenweb http://holdenweb.blogspot.com
Recent Ramblings http://del.icio.us/steve.holden

Sep 3 '06 #6

Tor Erik wrote:
> Fredrik Lundh wrote:
>> what does "netstat" say about these sockets ?
>
> They are in the TIME_WAIT state... [snip MSDN registry workarounds]
>
> I do not want to alter my registry, so I'm currently testing an idea
> where I let the client connect and send its content, appended with my
> own "magic" EOF byte-sequence. When the server receives this EOF, it
> takes care to close the connection. This should eliminate the problem
> as it is the peer closing the connection that enters the TIME_WAIT
> state...
>
> I will report my experiences...
Well... my idea does not work as expected. Even though the server
(remote host) calls socket.close(), it is the client that ends up in
TIME_WAIT. My guess is that the layers below the socket API close the
connection at the peer that called connect, regardless of where
socket.close is called.

Thoughts anyone?
Sep 3 '06 #7

Tor Erik wrote:
> [snip]
>
> Well... my idea does not work as expected. Even though the server
> (remote host) calls socket.close(), it is the client that ends up in
> TIME_WAIT. My guess is that the layers below the socket API close the
> connection at the peer that called connect, regardless of where
> socket.close is called.
>
> Thoughts anyone?
Yes, it's the transport layer that puts the socket into the TIME_WAIT state.

regards
Steve

Sep 3 '06 #8

Steve Holden <st***@holdenweb.com> wrote:
...
>> set registry key:
>> HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters\MaxUserPort
>> to a new DWORD value... (5000 - 65534)
>> The default in XP is 3976 - http://support.microsoft.com/kb/Q149532

I wonder why (performance under RAM-constrained conditions? but then why
not have this vary depending on available RAM -- complications?)

> Yes, it's the transport layer that puts the socket into the TIME_WAIT
> state.

Yes, there's a good explanation at
<http://www.developerweb.net/forum/showthread.php?t=2941> (though one
should really study Stevens' "TCP/IP Illustrated" for a deeper
understanding). Playing with SO_LINGER and/or the MSL is not
recommended unless you're operating only on a network that you entirely
control (particularly in terms of round-trip times and router behavior).

As debated at
<http://www.sunmanagers.org/pipermail...ry/007068.html>,
you may be able to have your clients go into CLOSE_WAIT (instead of
TIME_WAIT) by playing around with "who closes the socket first", and
CLOSE_WAIT might be more transient than the 2*MSL (240 seconds...)
needed for TIME_WAIT on possibly-unreliable networks. But it's far from
sure that this would gain you much.

Reflecting on the OP's use case: since all connections are forever being
made to the same 16 servers, why not tweak things a bit to hold those
connections open for longer periods of time, using a connection for many
send/receive "transactions" instead of opening and closing such
connections all of the time? That might well work better...
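One way this suggestion might be sketched: a long-lived connection carrying many framed "transactions". The 4-byte length prefix and helper names here are my own choices, not something from the thread; a sentinel-based framing would also work:

```python
import socket
import struct
import threading

def send_item(sock, payload):
    """Write one length-prefixed item on a long-lived connection."""
    sock.sendall(struct.pack("!I", len(payload)) + payload)

def recv_exact(sock, n):
    """Read exactly n bytes, looping over short reads."""
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("peer closed mid-frame")
        buf += chunk
    return buf

def recv_item(sock):
    """Read one length-prefixed item."""
    (length,) = struct.unpack("!I", recv_exact(sock, 4))
    return recv_exact(sock, length)

# Demo: one connection, many items -- no per-item TIME_WAIT sockets.
listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))
listener.listen(1)
received = []

def server():
    conn, _ = listener.accept()
    for _ in range(3):
        received.append(recv_item(conn))
    conn.close()

t = threading.Thread(target=server)
t.start()
client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(listener.getsockname())
for item in (b"file-1", b"file-2", b"file-3"):
    send_item(client, item)
t.join()
client.close()
listener.close()
```

This is exactly the "parsing the received data into data items" the OP hoped to avoid, but the framing code is small and sidesteps the ephemeral-port problem entirely.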
Alex
Sep 3 '06 #9

2006/9/3, Alex Martelli <al***@mac.com>:
> Reflecting on the OP's use case, since all connections are forever
> being made to the same 16 servers, why not tweak things a bit to hold
> those connections open for longer periods of time, using a connection
> for many send/receive "transactions" instead of opening and closing
> such connections all of the time? That might well work better...

Connecting to 16 different servers per second gives very poor
performance, right? There's some overhead in creating TCP connections,
even on fast networks and computers. Am I right?

--
Felipe.
Sep 3 '06 #10

Felipe Almeida Lessa <fe**********@gmail.com> wrote:
> 2006/9/3, Alex Martelli <al***@mac.com>:
>> Reflecting on the OP's use case, since all connections are forever
>> being made to the same 16 servers, why not tweak things a bit to hold
>> those connections open for longer periods of time, using a connection
>> for many send/receive "transactions" instead of opening and closing
>> such connections all of the time? That might well work better...
>
> Connecting to 16 different servers per second gives very poor
> performance, right? There's some overhead in creating TCP connections,
> even on fast networks and computers. Am I right?
There is some overhead, yes, but 16 connections per second are few
enough that I wouldn't expect that to be material (on networks with low
latency -- it might be different if you have "long fat pipes", networks
with huge bandwidth and thus "fast" but with high latency -- several
aspects of TCP/IP don't work all that well with those).

For example, try tweaking examples 1 and 2 for chapter 19 of "Python in
a Nutshell" (you can freely download the zipfile w/all examples from
<http://examples.oreilly.com/pythonian/pythonian-examples.zip>) and do
some experiments. I ran the unmodified server on an iBook G4, and on a
Macbook Pro I ran a client that looped 100 times, connecting, sending
'ciao', receiving the response, and printing a timestamp and the
response just received -- with the two laptops, sitting next to each
other, connected via Ethernet (just 100 mbps on the iBook, thus the
gigabit ethernet on the MBP wasn't really being used well;-) through a
direct cable (using ifconfig by hand to make a tiny 192.168.1/24 LAN
there;-). The 100 iterations, all together, took a bit more than half a
second (and, of course, one could try tweaking the asyncore or Twisted
examples from the same chapter to let many connections happen at once,
rather than having them "in sequence" as I just did - and many other
such small but interesting experiments). As a further data point, I
then reran the same experiment after removing the Ethernet cable, so
that the laptops were now connecting through wi-fi (802.11g, 54mbps, the
router being an Airport Express -- everything in the same room, within a
radius of 2 meters, excellent signal reception of course;-): ran in this
way, the 100 iterations took a total of over 2 seconds.
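A rough, self-contained version of this experiment can be run over loopback (timings will of course differ from the two-laptop setup described above; the server here is a minimal stand-in, not the book's example):

```python
import socket
import threading
import time

def echo_server(listener):
    """Accept connections one at a time; echo a single message each."""
    while True:
        try:
            conn, _ = listener.accept()
        except OSError:        # listening socket was closed
            return
        with conn:
            data = conn.recv(4096)
            if data:
                conn.sendall(data)

listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))    # port 0: OS picks a free port
listener.listen(16)
addr = listener.getsockname()
threading.Thread(target=echo_server, args=(listener,), daemon=True).start()

N = 100
t0 = time.perf_counter()
for _ in range(N):    # fresh connection per request, as in the experiment
    s = socket.create_connection(addr)
    s.sendall(b"ciao")
    assert s.recv(4096) == b"ciao"
    s.close()
elapsed = time.perf_counter() - t0
listener.close()
print("%d connect/send/recv/close cycles took %.3f s" % (N, elapsed))
```

The point stands either way: even with full setup/teardown per request, 16 connections per second is a light load on a low-latency network.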
Alex
Sep 3 '06 #11

Felipe Almeida Lessa wrote:
> Connecting to 16 different servers per second gives very poor
> performance, right? There's some overhead in creating TCP connections,
> even on fast networks and computers. Am I right?
I can think of four costs you would incur by not reusing connections:
1) extra sockets. This is what the OP experienced.
2) startup/teardown bandwidth. More packets need to be sent to start
or end a connection
3) startup latency. It takes some time to create a usable connection.
4) ramp-up time. TCP doesn't dump everything on the network as soon
as a connection is opened. It "feels it out" (slow start), giving it
more, then a little more, until it finds the limit. If you're sending
multiple files (small ones especially!) you'll likely hit this.

So yeah, bottom line, it IS faster and more efficient to reuse
connections. If you're doing a protocol for sending files you'll want
to do it.

--
Adam Olsen, aka Rhamphoryncus

Sep 5 '06 #12

This discussion thread is closed
