Bytes IT Community

Network performance

Hi!

I need a fast protocol to use between a client and a server, both
sides written in Python.

What the protocol has to accomplish is extremely simple; the client
sends a number of lines (actually an RDF) and the server accepts or
rejects the packet. That's all!

Now, presently I'm using Twisted and XMLRPC (why so is a long
history which I will not go into here) and that is a bit too slow for
my needs. On my present setup it takes close to 200 ms to perform one
transfer. But since that includes setting up and tearing down the
connection I thought I'd try doing something simpler, so I wrote a
server and a client using socket.

The client sends a number of lines (each ending with \n) and ends one
set of lines with an empty line.
When the client sends a line containing only a "." it means "I'm done,
close the connection".
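A minimal sketch of that wire protocol (the function names and the "OK" verdict are made up for illustration; the poster's actual code isn't reproduced here):

```python
import socket
import threading

def serve_one_client(srv):
    """Accept one connection and run the line protocol until '.'."""
    conn, _ = srv.accept()
    buf = b""
    current_set = []
    while True:
        data = conn.recv(4096)
        if not data:
            break
        buf += data
        while b"\n" in buf:
            line, buf = buf.split(b"\n", 1)
            if line == b".":            # "I'm done, close the connection"
                conn.close()
                return
            if line == b"":             # empty line ends one set
                current_set = []
                conn.sendall(b"OK\n")   # accept (real logic could reject)
            else:
                current_set.append(line)

def send_set(host, port, lines):
    """Send one set of lines and return the server's verdict."""
    s = socket.create_connection((host, port))
    s.sendall(b"".join(l + b"\n" for l in lines) + b"\n")
    verdict = s.recv(1024)
    s.sendall(b".\n")                   # tell the server we're done
    s.close()
    return verdict

# Demonstration against a throwaway local server:
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))              # any free port
srv.listen(1)
t = threading.Thread(target=serve_one_client, args=(srv,))
t.start()
verdict = send_set("127.0.0.1", srv.getsockname()[1], [b"<s> <p> <o> ."])
t.join()
srv.close()
print(verdict)                          # b"OK\n"
```

The accept/reject decision is stubbed out here; only the framing (newline-terminated lines, empty-line set terminator, lone "." to close) matches the description above.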

Letting the client open the connection and send a number of sets, I
was surprised to find that the performance was equal to what Twisted/
XMLRPC did: around 200 ms per set, not significantly less.

So what can be done to speed things up, or is Python incapable of
supporting fast networking? (Just a teaser.)

-- Roland


Aug 23 '05 #1
4 Replies


Roland Hedberg wrote:
What the protocol has to accomplish is extremely simple; the client
sends a number of lines (actually an RDF) and the server accepts or
rejects the packet. That's all!
....
Now, presently I'm using Twisted and XMLRPC (why so is a long
history which I will not go into here) and that is a bit too slow for
my needs. On my present setup it takes close to 200 ms to perform one
transfer. But since that includes setting up and tearing down the
connection I thought I'd try doing something simpler so I wrote a
server and a client using socket.
....
I was surprised to find that the performance was equal to what
Twisted/XMLRPC did. Around 200 ms per set, not significantly less.


That should tell you two things:
* Twisted/XMLRPC is as efficient as anything you can hand-craft (which
is a good reason for using it).
* That what you're measuring is overhead - and most likely setup.

I'd measure the ping time between your two hosts. Why? TCP is not the
fastest protocol in the world. If you're measuring the whole
connection, you're dealing with a number of TCP messages:
* Client -> SYN
* Server -> SYN ACK
* Client -> ACK
* Client -> DATA (Assume data fits inside one message)
* Server -> ACK
* Client -> FIN
* Server -> FIN ACK
* Server -> FIN
* Client -> FIN ACK
(Based on Fig 18.3, Stevens, TCP/IP Illustrated, Vol 2)

That's 9 packets. 200 ms / 9 == 22.2 ms per packet.

*IF* your ping time is close to this, then what you're really measuring
is TCP setup/teardown latency, not python latency. (*IF* you're on a
LAN and the latency is around that I'd suggest that you may have
discovered a local network problem)
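One direct way to check whether setup dominates is to time the connect/close handshake on its own and compare it against a full transaction. A rough sketch (the helper name is invented; it's run here against a throwaway local listener):

```python
import socket
import threading
import time

def avg_connect_ms(host, port, n=20):
    """Average time, in ms, to open and immediately close a TCP connection."""
    total = 0.0
    for _ in range(n):
        t0 = time.perf_counter()
        s = socket.create_connection((host, port))
        s.close()
        total += time.perf_counter() - t0
    return total * 1000.0 / n

# Throwaway listener that accepts and drops connections:
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))
srv.listen(50)

def accept_loop():
    try:
        while True:
            conn, _ = srv.accept()
            conn.close()
    except OSError:        # listener was closed; we're done
        pass

threading.Thread(target=accept_loop, daemon=True).start()
avg = avg_connect_ms("127.0.0.1", srv.getsockname()[1])
srv.close()
print("average connect+close: %.3f ms" % avg)
```

If this number is a large fraction of the 200 ms per transaction, the cost is connection setup/teardown, not Python.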

If your ping time is significantly lower - e.g. you're running on
localhost - I'd suggest you translate your code to C (and/or post
your code), and repeat the same tests. So far you haven't shown
that Python is slow here, just that something you're measuring
causes it to take 200 ms whether you use Twisted or not. (So you HAVE
shown that Twisted isn't slow :-)

Another issue might be how you're measuring the latency - especially
whether you're including the startup of Python in the 200 ms or not!

Regards,
Michael.
--
Mi************@rd.bbc.co.uk, http://kamaelia.sourceforge.net/
British Broadcasting Corporation, Research and Development
Kingswood Warren, Surrey KT20 6NP

This message (and any attachments) may contain personal views
which are not the views of the BBC unless specifically stated.

Aug 23 '05 #2


On 23 Aug 2005, at 10:14, Michael Sparks wrote:
Roland Hedberg wrote:
I was surprised to find that the performance was equal to what
Twisted/XMLRPC did. Around 200 ms per set, not significantly less.

That should tell you two things:
* Twisted/XMLRPC is as efficient as anything you can hand-craft (which
is a good reason for using it).


I already gathered that much :-)
* That what you're measuring is overhead - and most likely setup.

Not necessarily! If the number of client-server queries/responses
is large enough, the effect of the setup time should be negligible.
And by making tests with different numbers of queries you should be
able to deduce the setup time.
I'd measure the ping time between your two hosts.

If your ping time is significantly lower - eg you're running on
localhost - I'd suggest you translate your code to C (and/or post
your code),


I did the tests on localhost!

And I did post the code!

So, I made another test. I used a server I had already written in C,
whose speed I know quite well.

Using a Python client I've written that talks to this server, it
takes 0.8 s for the Python client to start, connect and send
1000 queries. A C client is a bit faster, but not by much.

This is more in the order of what I'd like to have.

Hmm, not surprisingly, this makes me suspect my Python server
implementation :-/
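The "different numbers of queries" idea mentioned above amounts to fitting a linear model, total_time ≈ setup + n × per_query, which two samples are enough to solve. A sketch, using made-up sample numbers in the same ballpark as the figures in this thread:

```python
def split_setup_cost(n1, t1, n2, t2):
    """Solve total = setup + n * per_query from two (n, seconds) samples."""
    per_query = (t2 - t1) / (n2 - n1)
    setup = t1 - n1 * per_query
    return setup, per_query

# Hypothetical measurements: 100 queries in 0.28 s, 1000 queries in 1.0 s.
setup, per_query = split_setup_cost(100, 0.28, 1000, 1.0)
print("setup: %.1f ms, per query: %.2f ms"
      % (setup * 1000, per_query * 1000))   # setup: 200.0 ms, per query: 0.80 ms
```

With these invented numbers the fixed cost comes out at 200 ms, matching the suspicious per-transfer figure, while each additional query costs under a millisecond.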

-- Roland
Aug 23 '05 #3

Roland Hedberg <ro************@adm.umu.se> wrote:
[ ... ]
The client sends a number of lines (each ending with \n) and ends one
set of lines with an empty line.
When the client sends a line containing only a "." it means "I'm done,
close the connection".

Letting the client open the connection and sending a number of sets I
was surprised to find that the performance was equal to what Twisted/
XMLRPC did. Around 200 ms per set, not significantly less.

So what can be done to speed things up, or is Python incapable of
supporting fast networking? (Just a teaser.)
That 200ms is a dead giveaway (if you've run into this issue before).
If you've got a Linux box to hand, try sticking the server on there
and see what happens.

The key to what is going on is here:
http://www.port80software.com/200ok/...01/31/317.aspx

The easy solutions are to either change:

    def send( self, rdf ):
        self.s.send( rdf )
        self.s.send( "\n" )

to

    def send( self, rdf ):
        self.s.send( rdf + "\n" )

or to call self.s.setsockopt(socket.SOL_TCP, socket.TCP_NODELAY, 1)
after connecting self.s .

What's going on is something like this:

1. Client sends rdf

2. Server receives rdf. But it hasn't received the \n yet, so it doesn't
have a response to send yet. And because of delayed ACK, it won't send
the ACK without the DATA packet containing that response.

3. Meanwhile, the client tries to send the \n, but because Nagle is on,
it would make an underfull packet, and it hasn't received the ACK for
the last packet yet, so it holds on to the \n.

4. Eventually (after 200ms on Windows and, I believe, BSDs, hence OSX,
but much less on Linux) the Server gives up waiting for a DATA packet
and sends the ACK anyway.

5. Client gets this ACK and sends the \n.

6. At which point the server has a complete line it can process, and
sends its response in a DATA packet with the ACK to the \n "piggybacked".

So the solutions are either send rdf+"\n" as a single packet in 1, and
the exchange will skip straight to 6, avoiding the 200ms delay; or
disable Nagle so 3 doesn't hold and the Client sends the \n
immediately instead of waiting for the ACK in 5.
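Both fixes can be sketched together as follows (the Client class, the RDF payload, and the local receiver are stand-ins for illustration, not the poster's actual code):

```python
import socket
import threading

class Client:
    def __init__(self, host, port):
        self.s = socket.create_connection((host, port))
        # Fix 2: disable Nagle, so small writes leave immediately
        # instead of waiting for the previous packet's ACK.
        self.s.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)

    def send(self, rdf):
        # Fix 1: one write per logical message, so the "\n" terminator
        # never sits in a separate underfull packet.
        self.s.sendall(rdf + b"\n")

# Demonstration against a throwaway local receiver:
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))
srv.listen(1)
received = []

def recv_one_line():
    conn, _ = srv.accept()
    buf = b""
    while not buf.endswith(b"\n"):
        buf += conn.recv(1024)
    received.append(buf)
    conn.close()

t = threading.Thread(target=recv_one_line)
t.start()
c = Client("127.0.0.1", srv.getsockname()[1])
c.send(b"<s> <p> <o> .")
t.join()
srv.close()
print(received[0])
```

Either fix alone is enough to sidestep the Nagle/delayed-ACK interaction; using `IPPROTO_TCP` rather than `SOL_TCP` is just the more portable spelling of the same option level.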

--
\S -- si***@chiark.greenend.org.uk -- http://www.chaos.org.uk/~sion/
___ | "Frankly I have no feelings towards penguins one way or the other"
\X/ | -- Arthur C. Clarke
her nu becomeþ se bera eadward ofdun hlæddre heafdes bæce bump bump bump
Aug 23 '05 #4


On 23 Aug 2005, at 15:14, Sion Arrowsmith wrote:
Roland Hedberg <ro************@adm.umu.se> wrote:
The easy solutions are to either change:

    def send( self, rdf ):
        self.s.send( rdf )
        self.s.send( "\n" )

to

    def send( self, rdf ):
        self.s.send( rdf + "\n" )

or to call self.s.setsockopt(socket.SOL_TCP, socket.TCP_NODELAY, 1)
after connecting self.s .


I had just caught this but didn't really know why yet!
What's going on is something like this:


Excellent description!

Thanks ever so much!!

-- Roland
Aug 23 '05 #5
