"RobG" <rg***@iinet.net.auau> wrote in message
news:4I******************@news.optus.net.au...
> Anyone who has tried to transfer lots of files over a network knows
> it is much, much faster to make one big file, send it, then unpack it
> at the other end - ever heard of CPIO or its friend, SCPIO? Is UNIX
> really that dead? Or has Linux changed the name to something
> presumably more sexy but nonetheless awfully geeky?
Anyone who has downloaded content from a Web site knows it's much faster
to let the browser do GET requests on several images at once than it is
to do a GET, wait for an image to download, do another GET, let that
image download, etc.
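As a rough sketch (the image names here are made up), starting all the
requests up front lets the browser use its parallel connections rather
than waiting for each image in turn:

  var urls = ["a.gif", "b.gif", "c.gif", "d.gif"]; // hypothetical files
  var images = [];
  for (var i = 0; i < urls.length; i++) {
      images[i] = new Image();
      images[i].src = urls[i]; // each GET is issued right away, so
                               // several can be in flight at once
  }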
If you put all your JavaScript in one large file, it has to download
serially over a single connection.
If you split it into several small files, the browser can perform up to
4 GETs (HTTP 1.0) or 2 GETs (HTTP 1.1) simultaneously.
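A sketch of the same idea for scripts (the file names are invented, and
scripts added this way are not guaranteed to run in document order, so
treat it as an illustration only):

  var files = ["core.js", "ui.js", "validation.js"]; // hypothetical
  var head = document.getElementsByTagName("head")[0];
  for (var i = 0; i < files.length; i++) {
      var s = document.createElement("script");
      s.type = "text/javascript";
      s.src = files[i]; // the browser can fetch these over parallel
                        // connections, up to its per-host limit
      head.appendChild(s);
  }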
Anyone who has ever FTPed several large files over the Internet will
have seen this too. For example, when I FTP FreeBSD ISOs, I get
approximately 60-80Kb/s on each of the 4 simultaneous downloads. As
downloads finish and only one remains, that ISO downloads at
approximately 100Kb/s, not the 240-320Kb/s you might expect.
When it comes to the Internet, one large pipe is not the same as several
smaller ones.
--
Grant Wagner <gw*****@agricoreunited.com>
comp.lang.javascript FAQ -
http://jibbering.com/faq