fx@testard-vaillant.com (François-Xavier Testard-Vaillant) wrote in message news:<b2************************@posting.google.com>...
Have you tried LWP::Parallel?
As far as I understand it, LWP::Parallel lets me parallelize
several HTTP requests, i.e. not wait for the end of the first one
before sending the second one, and so on.
What I need is to send a bunch of requests and leave, not waiting for
the results and not killing them.
The only way I found to do this is to fork the current process and
kill the parent. But given what I want to do (propagation in a
graph), that could create a lot of processes.
fxtv
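Just so we're talking about the same thing, I read your fork trick as
roughly this (untested sketch; the URL is just a placeholder, and each
child fires one request and disappears):

    #!/usr/bin/perl
    use strict;
    use warnings;
    use LWP::UserAgent;

    $SIG{CHLD} = 'IGNORE';                        # don't leave zombies lying around
    my @urls = ('http://www.example.com/ping');   # placeholder target(s)

    for my $url (@urls) {
        my $pid = fork;
        die "fork failed: $!" unless defined $pid;
        next if $pid;                             # parent moves on immediately
        # child: send the request, ignore the answer, vanish
        LWP::UserAgent->new->get($url);
        exit 0;
    }

That's one process per request, which is exactly the blow-up you are
worried about, hence the two ideas below: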
- use a very short timeout / tweak LWP to have a 0 timeout
  (there's a rough sketch of this further down),
or
- use a typical LWP::UserAgent request, but disable the read
  from the connected socket once the HTTP request has been
  transmitted, in LWP/Protocol/http.pm (yep, you need to change the
  LWP code, it's a shame!), around this spot:
    # read response line from server
    LWP::Debug::debug('reading response');
I guess a statement like return HTTP::Response->new()
might work, right after the debug print, but you need
to sort it out for yourself.
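I.e. the patched spot might end up looking like this (completely
untested, and the empty response object is just a dummy so callers
still get something back):

    # read response line from server
    LWP::Debug::debug('reading response');
    return HTTP::Response->new;   # bail out: request is on the wire, skip reading the reply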
Don't know what the effects are; perhaps all kinds of TCP/IP
buffers will overflow (they get flushed if you die).
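The short-timeout route from the first point above at least needs no
patching; something like this (the one second is arbitrary, and a
timeout that's too short may well abort before the request has been
fully written):

    use LWP::UserAgent;

    my $ua = LWP::UserAgent->new;
    $ua->timeout(1);                                    # give up almost immediately
    my $res = $ua->get('http://www.example.com/ping');  # placeholder URL
    # $res will normally be an error response once the timeout hits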
Getting the response data back would require splitting the
sender and receiver parts into two threads (one fork), or another
trick would be to start reading only after the 2nd request has been
sent, fetching back the results of the 1st request then (or even wait
N requests before starting to read, to get around TCP/IP buffer overflows).
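With plain sockets rather than LWP, that stay-N-requests-ahead trick
could look roughly like this (hosts are placeholders, no error
handling, untested):

    use strict;
    use warnings;
    use IO::Socket::INET;

    my @targets = ('www.example.com', 'www.example.org');   # placeholders
    my $lag     = 1;       # how many requests to stay ahead of the reads
    my @pending;           # sockets whose responses we haven't read yet

    for my $host (@targets) {
        my $sock = IO::Socket::INET->new(
            PeerAddr => $host,
            PeerPort => 80,
            Proto    => 'tcp',
        ) or next;
        print $sock "GET / HTTP/1.0\r\nHost: $host\r\n\r\n";
        push @pending, $sock;

        # once we're $lag requests ahead, read back the oldest response
        if (@pending > $lag) {
            my $old = shift @pending;
            local $/;                    # slurp the whole response
            my $response = <$old>;
            close $old;
        }
    }
    # drain whatever is still outstanding
    for my $old (@pending) {
        local $/;
        my $response = <$old>;
        close $old;
    }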
I guess I'm redoing LWP::Parallel here (in a single-threaded model)
(or is it the same? dunno, never seen it or used it, sorry).
Btw, there are easier ways to dump lots of data into TCP/IP ports like
80; see perldoc -f socket and perldoc -f send.
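Something along these lines, i.e. write the request and walk away
(placeholder host, untested):

    use strict;
    use warnings;
    use Socket;

    my $host = 'www.example.com';                      # placeholder target
    socket(my $sock, PF_INET, SOCK_STREAM, getprotobyname('tcp'))
        or die "socket: $!";
    connect($sock, sockaddr_in(80, inet_aton($host)))
        or die "connect: $!";
    send($sock, "GET / HTTP/1.0\r\nHost: $host\r\n\r\n", 0);
    close $sock;   # never read the response; the kernel finishes sending for us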