
better to serve one big js file or several smaller ones?

Hi,

I'm curious about server load and download time if I use one big
javascript file or break it into several smaller ones. Which is better?
(Please think of this as the first time the scripts are downloaded so
that browser caching is out of the equation.)

Thanks,
Peter

Mar 22 '06 #1
pe**********@gmail.com said the following on 3/22/2006 1:45 AM:
Hi,

I'm curious about server load and download time if I use one big
javascript file or break it into several smaller ones. Which is better?
(Please think of this as the first time the scripts are downloaded so
that browser caching is out of the equation.)


That depends on your aim. IE will load them faster as separate files,
and separate files will allow the browser to go ahead and parse
them as they are downloaded.

--
Randy
comp.lang.javascript FAQ - http://jibbering.com/faq & newsgroup weekly
Javascript Best Practices - http://www.JavascriptToolbox.com/bestpractices/
Mar 22 '06 #2
On Wed, 22 Mar 2006 01:52:42 -0500, Randy Webb
<Hi************@aol.com> wrote:
pe**********@gmail.com said the following on 3/22/2006 1:45 AM:
Hi,

I'm curious about server load and download time if I use one big
javascript file or break it into several smaller ones. Which is better?
(Please think of this as the first time the scripts are downloaded so
that browser caching is out of the equation.)


That depends on your aim. IE will load them faster as separate files,
and separate files will allow the browser to go ahead and parse
them as they are downloaded.


That's assuming a good connection; lots of the world doesn't have a
good connection, or is using high-latency mobile connections (where
even if the transfer speeds are high, the overhead of making a request
is also high, so small files take a long time to be requested).

one gzipped large file is your best bet.

Jim.
Mar 22 '06 #3
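
[A rough worked example of Jim's latency point, with assumed numbers
rather than figures from this thread: on a mobile link with a 300 ms
round trip per request, ten separate script files cost roughly
10 × 300 ms = 3 s in request overhead alone, while one combined file
pays a single 300 ms round trip, whatever the transfer speed.]
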
Jim Ley wrote:
[...] Randy Webb [...] wrote:
pe**********@gmail.com said the following on 3/22/2006 1:45 AM:
I'm curious about server load and download time if I use one big
javascript file or break it into several smaller ones. Which is better?
(Please think of this as the first time the scripts are downloaded so
that browser caching is out of the equation.)

That depends on your aim. IE will load them faster as separate files,
and separate files will allow the browser to go ahead and parse
them as they are downloaded.

That's assuming a good connection; lots of the world doesn't have a
good connection, or is using high-latency mobile connections (where
even if the transfer speeds are high, the overhead of making a request
is also high, so small files take a long time to be requested).

one gzipped large file is your best bet.


Not at all.

1. Not every UA supports gzip-compressed responses. Those which do not,
will (hopefully) be served the uncompressed version of that large
resource, with the known drawbacks. If one relies on gzip compression,
inevitably the resources will grow larger than usual, and so will the
loading time, considerably, if gzip compression is not supported by the
client.

2. Ideally each resource should be less than 1160 bytes, to easily fit
into one TCP/IP packet. It is therefore a Good Thing if resources
are not too large. However, one has to find a healthy balance between
the number of chunks the resource is split into and the size of each
chunk, because too many chunks require too many HTTP requests.

3. Functionality should be split into libraries that deal exactly
with a particular feature. That allows for easier maintenance,
usually overall smaller download size if only a particular feature
from that feature set is needed, and it avoids problems for people
with older editors (that have a size limit).
PointedEars
Mar 22 '06 #4
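
A minimal sketch of the content negotiation PointedEars describes in
point 1, written here as a Node.js handler (Node.js itself, the port,
and the file name site.js are illustrative assumptions, not anything
from the thread): the gzipped body is sent only when the client's
Accept-Encoding header advertises gzip; every other client gets the
larger uncompressed fallback.

// Sketch only: serve the script gzipped when the UA advertises
// support, uncompressed otherwise. Assumes Node.js; 'site.js' is
// a hypothetical bundled script.
const http = require('http');
const zlib = require('zlib');
const fs = require('fs');

http.createServer(function (req, res) {
  const body = fs.readFileSync('site.js');
  const acceptsGzip = /\bgzip\b/.test(req.headers['accept-encoding'] || '');
  res.setHeader('Content-Type', 'application/javascript');
  if (acceptsGzip) {
    res.setHeader('Content-Encoding', 'gzip');
    res.end(zlib.gzipSync(body)); // compressed path
  } else {
    res.end(body); // uncompressed fallback, with the size penalty above
  }
}).listen(8080);
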
@sh
> one gzipped large file is your best bet.

Jim.


I presume that's some sort of text compression? Does it make the file
unusable for editing without dezipping?
Mar 22 '06 #5
On Wed, 22 Mar 2006 16:40:44 +0100, Thomas 'PointedEars' Lahn
<Po*********@web.de> wrote:
Jim Ley wrote:
1. Not every UA supports gzip-compressed responses. Those which do not,
will (hopefully) be served the uncompressed version of that large
resource, with the known drawbacks.
Of course, there's no associated drawbacks, the total size of all the
files together or one file is the same, but you've saved all the bytes
in the headers, both request and receive - gzip is also likely more
efficient on the larger single file (more identical tokens)
If one relies on gzip compression,
inevitably the resources will grow larger than usual,
There's no reason to conclude that.
2. Ideally each resource should be less than 1160 bytes, to easily fit
into one TCP/IP packet.
1160 bytes leaves you about 200 bytes for your js code, less if there
are large cookies etc.; that's a pointless amount of code to split your
files into.

3. Functionality should be split into libraries that deal exactly
with a particular feature.
I can't agree; delivering multiple javascript files is slow,
particularly on high-latency or slow-upload connections - such as
mobile services. You also increase the chance of one of the many
failing, which leads to more unpredictable failures - of course if you
live in the first world and use nothing but broadband connections, you
simply won't comprehend this, but please try and think outside your
own experiences.
That allows for easier maintenance,
maintenance and what is delivered to the client are separate issues,
don't confuse them, have a good build environment, much better than
imposing your maintenance constraints on your users' experience.
usually overall smaller download size if only a particular feature
from that feature set is needed


You should never be serving redundant code, but that's got nothing to
do with the files you deliver to the client. Stop confusing authoring
practices and consumer practices.

Jim.
Mar 22 '06 #6
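
Jim's build-environment point can be made concrete: keep the sources
split for maintenance, then concatenate them into the one file that is
actually delivered. A trivial sketch (Node.js and the module names are
assumptions for illustration; any tool that concatenates files would do):

// Sketch only: a minimal "build" step bundling split sources into one
// deliverable. The source file names are hypothetical.
const fs = require('fs');
const sources = ['dom.js', 'ajax.js', 'forms.js'];
const bundle = sources
  .map(function (f) { return fs.readFileSync(f, 'utf8'); })
  .join('\n;\n'); // stray semicolons guard against missing terminators
fs.writeFileSync('site.js', bundle);
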
On Wed, 22 Mar 2006 15:50:40 -0000, "@sh" <sp**@spam.com> wrote:
one gzipped large file is your best bet.

Jim.


I presume that's some sort of text compression? Does it make the file
unusable for editing without dezipping?


Well, you normally handle such gzipping transparently in your
webserver.

Jim.
Mar 22 '06 #7
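
The transparent gzipping Jim mentions is usually a one-line switch in
the server configuration, which also answers the editing question: the
file on disk stays plain, editable JavaScript, and compression happens
on the fly for clients that send Accept-Encoding: gzip. A sketch for
Apache 2.x, assuming mod_deflate is enabled:

AddOutputFilterByType DEFLATE application/javascript text/javascript
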
Jim Ley wrote:
[...] Thomas 'PointedEars' Lahn [...] wrote:
Jim Ley wrote:
1. Not every UA supports gzip-compressed responses. Those which do not,
will (hopefully) be served the uncompressed version of that large
resource, with the known drawbacks.
Of course, there's no associated drawbacks,


There are.
the total size of all the files together or one file is the same,
No, it is not, by definition. In fact, one big library is very likely
to be smaller when gzip-compressed than the concatenation of several
gzip-compressed libraries, because there is greater redundancy in it.

However, that does not help you with the fact that gzip-compressed
resources are not universally supported on the Web nowadays. Content
negotiation (that fails sometimes, but I would disregard that as a
bug of the UA) cannot mitigate the fact that if gzip-compressed
responses are /not/ supported, the resource that is to be downloaded
is considerably greater than with support for such responses.
but you've saved all the bytes in the headers, both request and receive -
gzip is also likely more efficient on the larger single file (more
identical tokens)
The drawback is that it is larger than the gzipped version, no matter how
many chunks there are. Using gzip compression as an argument that large
resources are OK nowadays simply does not hold water. Unless one wants
to completely disregard users of the mentioned UAs and slower connections.
If one relies on gzip compression,
inevitably the resources will grow larger than usual,


There's no reason to conclude that.


Yes, there is. It is simply human nature. If you rely entirely on something,
without understanding the repercussions of using it, you do not care about
what happens if it is not supported. How some people use, or rather
misuse, client-side scripting is a clear indication of this.
2. Ideally each resource should be less than 1160 bytes, to easily fit
into one TCP/IP packet.


1160 bytes leaves you about 200 bytes for your js code, [...]


How did you get that idea? I said _"ideally"_.
[...]
3. Functionality should be split into libraries that deal exactly
with a particular feature.
I can't agree; delivering multiple javascript files is slow,
particularly on high-latency or slow-upload connections - such as
mobile services. You also increase the chance of one of the many
failing, which leads to more unpredictable failures - of course if you
live in the first world and use nothing but broadband connections, you
simply won't comprehend this, but please try and think outside your
own experiences.


Think of a third-worlder who simply wants to access vital information, and
you are forcing him to download a, say, 100K script file as-is (because his
UA is old enough not to support gzip-compressed HTTP responses), of which,
say, 1% is really used on the site. Then reconsider what you
just said.
That allows for easier maintenance,


maintenance and what is delivered to the client are separate issues,


No.
don't confuse them, have a good build environment, much better than
imposing your maintenance constraints on your users' experience.
Users' experience will be greatly tarnished by waiting for the download of
code that is mostly not needed.
usually overall smaller download size if only a particular feature
from that feature set is needed


You should never be serving redundant code, but that's got nothing to
do with the files you deliver to the client.


It has. Using one big library for everything is serving loads of
redundant code.
Stop confusing authoring practices and consumer practices.


"Consumer practices"? You must be kidding.
PointedEars
Mar 22 '06 #8
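
The compression claim in this exchange is easy to test empirically.
A minimal sketch (Node.js with its built-in zlib module assumed; the
file names are hypothetical) comparing one concatenated-then-gzipped
file against the sum of the separately gzipped chunks:

// Sketch only: measure both strategies on real files and compare.
const fs = require('fs');
const zlib = require('zlib');

const files = ['a.js', 'b.js', 'c.js']; // hypothetical script chunks
const buffers = files.map(function (f) { return fs.readFileSync(f); });

// total size when each chunk is gzipped on its own
const separate = buffers.reduce(function (sum, buf) {
  return sum + zlib.gzipSync(buf).length;
}, 0);

// size when the chunks are concatenated and gzipped once
const combined = zlib.gzipSync(Buffer.concat(buffers)).length;

console.log('separately gzipped: ' + separate + ' bytes');
console.log('concatenated, then gzipped: ' + combined + ' bytes');

With typical scripts the concatenated file compresses somewhat better,
since the deflate window is shared across chunks; the exact margin
depends on the sources.
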
Thomas 'PointedEars' Lahn wrote:

That's assuming a good connection; lots of the world doesn't have a
good connection, or is using high-latency mobile connections (where
even if the transfer speeds are high, the overhead of making a request
is also high, so small files take a long time to be requested).

one gzipped large file is your best bet.

Not at all.

2. Ideally each resource should be less than 1160 bytes, to easily fit into one TCP/IP packet. It is therefore a Good Thing if resources
are not too large. However, one has to find a healthy balance between
the number of chunks the resource is split into and the size of each
chunk, because too many chunks require too many HTTP requests.
I can't agree with this. The higher the request to response packet
ratio, the more you suffer from latency issues, a real bugger on a poor
dial up or mobile connection.

A better argument would be to say the ideal size is the buffer size of
the client's TCP/IP stack (which tends to be at least 16K, less on
embedded devices, much more on desktop OSs). The server will send
multiple packets for one request, based on the client's advertised buffer
capacity.
3. Functionality should be split into libraries that deal exactly
with a particular feature. That allows for easier maintenance,
usually overall smaller download size if only a particular feature
from that feature set is needed, and it avoids problems for people
with older editors (that have a size limit).

Agreed, but I don't think the editor argument is relevant these days!
This has nothing to do with optimising download times, more of a 'best
practice'.

--
Ian Collins.
Mar 22 '06 #9
Ian Collins wrote:
Thomas 'PointedEars' Lahn wrote:
That's assuming a good connection; lots of the world doesn't have a
good connection, or is using high-latency mobile connections (where
even if the transfer speeds are high, the overhead of making a request
is also high, so small files take a long time to be requested).

one gzipped large file is your best bet.
Not at all.

2. Ideally each resource should be less than 1160 bytes, to easily fit
into one TCP/IP packet. It is therefore a Good Thing if resources
are not too large. However, one has to find a healthy balance between
the number of chunks the resource is split into and the size of each
chunk, because too many chunks require too many HTTP requests.

I can't agree with this. The higher the request to response packet
ratio, the more you suffer from latency issues, a real bugger on a poor
dial up or mobile connection.


(sic!)

Please learn to quote. With Mozilla/5.0, you should disable format-flowed
for posting, then the bug should go away.
A better argument would be to say the ideal size is the buffer size of
the client's TCP/IP stack (which tends to be at least 16K, less on
embedded devices, much more on desktop OSs). The server will send
multiple packets for one request, based on the client's advertised
buffer capacity.
The server will not send more packets than required, even if the TCP/IP
stack buffer size of the client is greater. The argument is void.
3. Functionality should be split into libraries that deal exactly
with a particular feature. That allows for easier maintenance,
usually overall smaller download size if only a particular feature
from that feature set is needed, and it avoids problems for people
with older editors (that have a size limit).

Agreed, but I don't think the editor argument is relevant these days!


You know /all/ users? And it is still a fact that editing a large file
is considerably slower than editing a small file.
This has nothing to do with optimising download times,
Yes, it does. If you split features into one script file each, you will
only have to serve the code for the needed features instead of all the
code you have ever written. The overhead of more requests is negligible
compared to the overhead from a large amount of served but ultimately
unused code.
more of a 'best practice'.


A common best practice that rests on several grounds.
PointedEars
Mar 22 '06 #10
Thomas 'PointedEars' Lahn said the following on 3/22/2006 12:36 PM:
Jim Ley wrote:
[...] Thomas 'PointedEars' Lahn [...] wrote:
Jim Ley wrote:
1. Not every UA supports gzip-compressed responses. Those which do not,
will (hopefully) be served the uncompressed version of that large
resource, with the known drawbacks.
Of course, there's no associated drawbacks,


There are.


Just as there are drawbacks to the converse. Everything has
drawbacks/negatives.
the total size of all the files together or one file is the same,


No, it is not, by definition. In fact, one big library is very likely
to be smaller when gzip-compressed than the concatenation of several
gzip-compressed libraries, because there is greater redundancy in it.


gzipped or not is irrelevant as to whether you should use 10 small files
or 1 large one.
However, that does not help you with the fact that gzip-compressed
resources are not universally supported on the Web nowadays. Content
negotiation (that fails sometimes, but I would disregard that as a
bug of the UA) cannot mitigate the fact that if gzip-compressed
responses are /not/ supported, the resource that is to be downloaded
is considerably greater than with support for such responses.


You are arguing against gzip, not about whether to use 1 large or
several small files.
but you've saved all the bytes in the headers, both request and receive -
gzip is also likely more efficient on the larger single file (more
identical tokens)


The drawback is that it is larger than the gzipped version, no matter how
many chunks there are. Using gzip compression as an argument that large
resources are OK nowadays simply does not hold water. Unless one wants
to completely disregard users of the mentioned UAs and slower connections.


Very true.
If one relies on gzip compression,
inevitably the resources will grow larger than usual,

There's no reason to conclude that.


Yes, there is. It is simply human nature. If you rely entirely on something,
without understanding the repercussions of using it, you do not care about
what happens if it is not supported. How some people use, or rather
misuse, client-side scripting is a clear indication of this.


As is the use of over-bloated code by people on broadband. If you argue
that larger files would become predominant with gzip then the same
argument holds for broadband users. And with the popularity of broadband
in the US that is going to become more the norm for websites - they use
broadband and ignore the non-broadband users.
2. Ideally each resource should be less than 1160 bytes, to easily fit
into one TCP/IP packet.

1160 bytes leaves you about 200 bytes for your js code, [...]


How did you get that idea? I said _"ideally"_.


And he showed why it is not the "ideal" file size.
[...]
3. Functionality should be split into libraries that deal exactly
with a particular feature.

I can't agree; delivering multiple javascript files is slow,
particularly on high-latency or slow-upload connections - such as
mobile services. You also increase the chance of one of the many
failing, which leads to more unpredictable failures - of course if you
live in the first world and use nothing but broadband connections, you
simply won't comprehend this, but please try and think outside your
own experiences.


Think of a third-worlder who simply wants to access vital information, and
you are forcing him to download a, say, 100K script file as-is (because his
UA is old enough not to support gzip-compressed HTTP responses), of which,
say, 1% is really used on the site. Then reconsider what you
just said.


That's ignorance on the author's part, not a flaw in the mechanism used
to implement his/her ignorance. And it has nothing to do with gzip.
That allows for easier maintenance,

maintenance and what is delivered to the client are separate issues,


No.
don't confuse them, have a good build environment, much better than
imposing your maintenance constraints on your users' experience.


Users' experience will be greatly tarnished by waiting for the download of
code that is mostly not needed.


Again, that is ignorance on the author's part. If the page needs 100K of
script to be operational, are you better off with 1 100K file or 10
files that are 10K? And the answer depends on what the code is and there
is no blanket "Do this" answer.
usually overall smaller download size if only a particular feature
from that feature set is needed

You should never be serving redundant code, but that's got nothing to
do with the files you deliver to the client.


It has. Using one big library for everything is serving loads of
redundant code.


Only if the code is redundant.

--
Randy
comp.lang.javascript FAQ - http://jibbering.com/faq & newsgroup weekly
Javascript Best Practices - http://www.JavascriptToolbox.com/bestpractices/
Mar 22 '06 #11
JRS: In article <11*********************@u72g2000cwu.googlegroups.com>,
dated Tue, 21 Mar 2006 22:45:15 remote, seen in
news:comp.lang.javascript, pe**********@gmail.com posted :

I'm curious about server load and download time if I use one big
javascript file or break it into several smaller ones. Which is better?
(Please think of this as the first time the scripts are downloaded so
that browser caching is out of the equation.)


There can be no safe, general answer, because it all depends on the type
of usage.

If you have 100 pages and 100 routines each used on only one page, and
if the nature of the pages is such that visitors are expected to go to
one page and then leave the site for long enough for cache expiry, then
there is clearly no point in delivering all 100 routines in one file.

OTOH, if the pages fetched by the anticipated visitors in one session
will between them use all the routines available, then all will have to
be delivered.

Faster is not necessarily better. If, as an author, you can make your
user saturate his link with your stuff, your page will load fastest.
But then the user cannot do anything else; he may wish to be running,
at the same time, a low-bandwidth server-CPU-limited process.
Consider a target audience of many on-campus students in affluent first-
world universities: links will be high-bandwidth, and non-browser
caching may be active. Now consider a target audience of third-world
fishermen: low-bandwidth, less effective caching. Clearly, it's
likely that the optimum solution will be different.
If you need to know the optimum, then the best you can do is to make the
tests, with typical data and users.

--
© John Stockton, Surrey, UK. ?@merlyn.demon.co.uk Turnpike v4.00 MIME. ©
Web <URL:http://www.merlyn.demon.co.uk/> - FAQish topics, acronyms, & links.
Proper <= 4-line sig. separator as above, a line exactly "-- " (SonOfRFC1036)
Do not Mail News to me. Before a reply, quote with ">" or "> " (SonOfRFC1036)
Mar 22 '06 #12
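
John's closing advice, to make the tests, is cheap to follow in the
page itself. A minimal timing sketch (the script name is a placeholder;
repeat with the split files, and with cold and warm caches, to compare):

<script>var t0 = new Date().getTime();</script>
<script src="one-big-file.js"></script>
<script>
// the elapsed time covers download plus parse of the script above
document.title = 'script load: ' + (new Date().getTime() - t0) + ' ms';
</script>
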
Thomas 'PointedEars' Lahn wrote:
Ian Collins wrote:

Thomas 'PointedEars' Lahn wrote:
That's assuming a good connection; lots of the world doesn't have a
good connection, or is using high-latency mobile connections (where
even if the transfer speeds are high, the overhead of making a request
is also high, so small files take a long time to be requested).

one gzipped large file is your best bet.

Not at all.

2. Ideally each resource should be less than 1160 bytes, to easily fit
into one TCP/IP packet. It is therefore a Good Thing if resources
are not too large. However, one has to find a healthy balance between
the number of chunks the resource is split into and the size of each
chunk, because too many chunks require too many HTTP requests.


I can't agree with this. The higher the request to response packet
ratio, the more you suffer from latency issues, a real bugger on a poor
dial up or mobile connection.

(sic!)

Please learn to quote. With Mozilla/5.0, you should disable format-flowed
for posting, then the bug should go away.

I'll change the way I post when you get a proper sig.
A better argument would be to say the ideal size is the buffer size of
the client's TCP/IP stack (which tends to be at least 16K, less on
embedded devices, much more on desktop OSs). The server will send
multiple packets for one request, based on the client's advertised
buffer capacity.

The server will not send more packets than required, even if the TCP/IP
stack buffer size of the client is greater. The argument is void.

No, read what I said again, the server will send several packets (if
required) for one TCP request if there is space in the client's receive
buffer. If a 4K page is requested, you will see three response packets
before the server waits for their ACKs. It's called windowing.

Read before declaring things void.

--
Ian Collins.
Mar 23 '06 #13
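
[The arithmetic behind Ian's 4K example, with assumed typical values:
an Ethernet MTU of 1500 bytes leaves roughly 1460 bytes of TCP payload
per segment, so a 4096-byte response needs ceil(4096 / 1460) = 3
segments, which a windowing server can send back-to-back before
pausing for acknowledgements.]
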
Ian Collins wrote:
Thomas 'PointedEars' Lahn wrote:
Ian Collins wrote:
Thomas 'PointedEars' Lahn wrote:

[...]
Please learn to quote. With Mozilla/5.0, you should disable
format-flowed for posting, then the bug should go away.


I'll change the way I post when you get a proper sig.


I have a proper sig, for I have _no_ sig most of the time. There are two
signatures: the name, and the signature part (which is delimited with "--
"). Neither is the name required to be in the signature part, nor is the
signature part a requirement at all. In fact, you will seldom see this in
Usenet.

Proper quoting, however, is more of a necessity, because few people will
take the time to decipher unreadable discussions, so few people will read
your postings. Proper quoting is therefore recommended by Netiquette and
this newsgroup's FAQ Notes, et al.
A better argument would be to say the ideal size is the buffer size of
the client's TCP/IP stack (which tends to be at least 16K, less on
embedded devices, much more on desktop OSs). The server will send
multiple packets for one request, based on the client's advertised
buffer capacity.


The server will not send more packets than required, even if the TCP/IP
stack buffer size of the client is greater. The argument is void.


No, read what I said again, the server will send several packets (if
required) for one TCP request if there is space in the client's receive
buffer. If a 4K page is requested, you will see three response packets
before the server waits for their ACKs. It's called windowing.

Read before declaring things void.


But in contrast to the standardized size of a TCP/IP packet (which I
considered more a side comment than a strong argument in favor or against),
you cannot know how big the actual client's TCP/IP stack buffer is. You
say that yourself.

So your argument, if it even qualifies as such, although it provides some
interesting information, /is/ void, or let us say irrelevant, regarding
the question of how large resources should be, and so regarding the question
discussed: whether it is better to split a large script resource or not.
PointedEars
Mar 23 '06 #14
Thomas 'PointedEars' Lahn wrote:
Ian Collins wrote:

Proper quoting, however, is more of a necessity, because few people will
take the time to decipher unreadable discussions, so few people will read
your postings. Proper quoting is therefore recommended by Netiquette and
this newsgroup's FAQ Notes, et al.
I hadn't noticed the long line; in fact you are the first person to
mention it after many years of posting. Mozilla is set to wrap at 72
characters, so I'll check into why what I see isn't what appears to be sent.

If you weren't so rude, people might not react the way they do to your
'hints'.

But in contrast to the standardized size of a TCP/IP packet (which I
considered more a side comment than a strong argument in favor or against),
you cannot know how big the actual client's TCP/IP stack buffer is. You
say that yourself.
True, but I've never seen a web-enabled device with a buffer restricted
to one packet.
So your argument, if it even qualifies as such, although it provides some
interesting information, /is/ void, or let us say irrelevant, regarding
the question of how large resources should be, and so regarding the question
discussed: whether it is better to split a large script resource or not.

Well that all depends on what you'd call 'large'. I'd say try and keep
them to under 10K.

--
Ian Collins.
Mar 23 '06 #15
On Wed, 22 Mar 2006 18:36:01 +0100, Thomas 'PointedEars' Lahn
<Po*********@web.de> wrote:
Jim Ley wrote:
the total size of all the files together or one file is the same,
No, it is not, by definition.


Of course it is: 10 1000-byte files and 1 10000-byte file are the same
total size.
but you've saved all the bytes in the headers, both request and receive -
gzip is also likely more efficient on the larger single file (more
identical tokens)


The drawback is that it is larger than the gzipped version, no matter how
many chunks there are.


What are you discussing? The question was: do we have 10 1k files or 1
10k file? That's the topic at hand, and it's clear that 1 10k file is
better, especially if the client supports gzip, but even when it
doesn't, and particularly on a slow uplink or intermittent
connection.
2. Ideally each resource should be less than 1160 bytes, to easily fit
into one TCP/IP packet.


1160 bytes leaves you about 200 bytes for your js code, [...]


How did you get that idea? I said _"ideally"_.


Yes, but pretty irrelevant given the fact that so few scripts meet
your ideal.
Think of a third-worlder who simply wants to access vital information, and
you are forcing him to download a, say, 100K script file as-is (because his
UA is old enough not to support gzip-compressed HTTP responses), of which,
say, 1% is really used on the site. Then reconsider what you
just said.


I would never suggest sending anyone a 100k script file, but I would
certainly recommend it to anyone suggesting they send 10 10k script
files. You seem confused by what the question was. I certainly don't
recommend libraries.
That allows for easier maintenance,


maintenance and what is delivered to the client are separate issues,


No.


Of course they are; if they're not, you should improve your build
techniques.
You should never be serving redundant code, but that's got nothing to
do with the files you deliver to the client.


It has. Using one big library for everything is serving loads of
redundant code.


This was not a question about libraries, stop pretending it was. It
was about whether it is better to send 10 10k files or 1 100k file; you
answered wrongly, and now you're trying to pretend it was a question
about libraries, where few here are going to disagree with you.

Jim.
Mar 23 '06 #16
Jim Ley wrote:
[...] Thomas 'PointedEars' Lahn [...] wrote:
Jim Ley wrote:
the total size of all the files together or one file is the same,
No, it is not, by definition.


Of course it is: 10 1000-byte files and 1 10000-byte file are the same
total size.


Will you please read more carefully? As I have already said, what you are
ignoring is that gzip compression is done per delivered resource, not per
connection. One large 10000-byte (uncompressed) resource can usually be
compressed better than 10 resources of a tenth of that size (uncompressed)
each because the large file contains more redundancy. That more redundancy
in the data allows for better compression is a rule of thumb
about data compression.

I have also said that this fact (which worked in your *favor* -- did you
not recognize even that?) does not matter here because gzip-compressed
responses are not universally supported on today's Web.
but you've saved all the bytes in the headers, both request and receive
- gzip is also likely more efficient on the larger single file (more
identical tokens)

The drawback is that it is larger than the gzipped version, no matter how
many chunks there are.


What are you discussing? The question was: do we have 10 1k files or 1
10k file? That's the topic at hand, [...]


And you have been arguing that because gzip compression exists, it does
not matter if it is one large file. Which is utterly wrong, as I have
pointed out. Do you have anything to say about this, or are you going
to offer only another red herring?
2. Ideally each resource should be less than 1160 bytes, to easily fit
into one TCP/IP packet.
1160 bytes leaves you about 200 bytes for your js code, [...]

How did you get that idea? I said _"ideally"_.


Yes, but pretty irrelevant given the fact that so few scripts meet
your ideal.


What you are ignoring is that I did not say the total amount of data
downloaded for a document and included resources should be less than
1160 bytes. I was referring to the resource, and there are several
scripts out there that would meet this standard, including some
of your own (e.g. your xmlhttp.js is 510 bytes, _uncompressed_).
However, as I already pointed out, that was only an /ideal/ mentioned
as a hook for the actual argument that resources should be small.
Think of a third-worlder who simply wants to access vital information,
and you are forcing him to download a, say, 100K script file as-is
(because his UA is old enough not to support gzip-compressed HTTP
responses), of which, say, 1% is really used on the site.
Then reconsider what you just said.


I would never suggest sending anyone a 100k script file, but I would
certainly recommend it to anyone suggesting they send 10 10k script
files.


But, as you kindly keep reminding me, the question was: One file or
many? At first you were recommending one file, and I have pointed out
that since it is likely that one file will grow large, it is not viable,
despite the possibility of transparent gzip compression. Now you are
saying you recommend many files. ISTM you are not really clear with
yourself which position you are going to take and argue for or
against.
You seem confused by what the question was.
/You/ really seem confused by what the question was.
I certainly don't recommend libraries.


That is an entirely different question. I wonder what you regard as
a library, though. Every script file could qualify as a library.
Especially the type of script file you argued for before would.
That allows for easier maintenance,
maintenance and what is delivered to the client are separate issues,

No.


Of course they are; if they're not, you should improve your build
techniques.


In this case they are not, because the design decision has direct impact
on what is delivered to the client. Certainly you are not recommending
to create a library (or a script file, if you wish), and then not use it
as is (compacting code, for example by removing documentation comments,
considered already), are you?
You should never be serving redundant code, but that's got nothing to
do with the files you deliver to the client.

It has. Using one big library for everything is serving loads of
redundant code.


This was not a question about libraries, [...]


The question was whether one should use one (big) script file or
several (smaller) ones.

I still wonder what you would call a library; however, that does not matter
regarding the question discussed. What matters is the size, hence I wrote
"one *big* library". I could also have written "one *big* script resource";
the argument would still be the same, and still be valid.

ISTM you are being deliberately obtuse and misunderstanding me on
purpose, because you know that your gzip argument does not hold
water. Probably by acting this way you think you can make anyone
believe that you do not have to defend it against the arguments being
brought up.
PointedEars
Mar 23 '06 #17
Ian Collins wrote:
Thomas 'PointedEars' Lahn wrote:
Ian Collins wrote:

Proper quoting, however, is more of a necessity, because few people will
take the time to decipher unreadable discussions, so few people will read
your postings. Proper quoting is therefore recommended by Netiquette and
this newsgroup's FAQ Notes, et al.

I hadn't noticed the long line; in fact you are the first person to
mention it after many years of posting. Mozilla is set to wrap at 72
characters, so I'll check into why what I see isn't what appears to be sent.

If you weren't so rude, people might not react the way they do to your
'hints'.


Rude? My original words were:

| Please learn to quote. With Mozilla/5.0, you should disable format-flowed
| for posting, then the bug should go away.

Your reaction to that kind request and advice was:

| [full quote again]
| I'll change the way I post when you get a proper sig.

And then I explained to you in great detail that and why you were wrong.
Now, who of us was rude?

I am not referring to your other "arguments", because your entire "argument"
is only a red herring (ignoratio elenchi[1]). Please troll elsewhere.
Score adjusted

PointedEars
___________
[1] <URL:http://en.wikipedia.org/wiki/Ignoratio_elenchi>
Mar 23 '06 #18
Thomas 'PointedEars' Lahn wrote:

Rude? My original words were:

| Please learn to quote. With Mozilla/5.0, you should disable format-flowed
| for posting, then the bug should go away.

Your reaction to that kind request and advice was:

| [full quote again]
| I'll change the way I post when you get a proper sig.

And then I explained to you in great detail that and why you were wrong.
Now, who of us was rude?
Telling a long-term Usenet user to learn to quote is a bit rude.
I am not referring to your other "arguments", because your entire "argument"
is only a red herring (ignoratio elenchi[1]). Please troll elsewhere.
Well that's another first, I've never trolled or been called a troll in
a decade of Usenet use. I was simply explaining that request latency
can be more of an issue than transfer time, which is a valid assertion.

Considering we have replied to each other's posts in the past, I find it
strange you should knock my posting style now.
Score adjusted

?

--
Ian Collins.
Mar 23 '06 #19
Ian Collins said the following on 3/23/2006 12:40 AM:
Thomas 'PointedEars' Lahn wrote:


<snip>
I am not referring to your other "arguments", because your entire
"argument"
is only a red herring (ignoratio elenchi[1]). Please troll elsewhere.

Well that's another first, I've never trolled or been called a troll in
a decade of Usenet use.


Welcome to the "Thomas called me a Troll" Club. You are member # 1232132
to date. Every time he calls you a Troll, you get a vote. I have around
16 million votes to date.

Welcome to the club and enjoy the benefits of membership :)

--
Randy
comp.lang.javascript FAQ - http://jibbering.com/faq & newsgroup weekly
Javascript Best Practices - http://www.JavascriptToolbox.com/bestpractices/
Mar 23 '06 #20
On Thu, 23 Mar 2006 05:48:44 +0100, Thomas 'PointedEars' Lahn
<Po*********@web.de> wrote:
Jim Ley wrote:
I have also said that this fact (which worked in your *favor* -- did you
not recognize even that?)
It was mentioned in my first post in the thread, so it's not
particularly relevant.
What are you discussing? The question was: do we have 10 1k files or 1
10k file? That's the topic at hand, [...]


And you have been arguing that because gzip compression exists, it does
not matter if it is one large file.


No I've not; I've said that because gzip exists, it adds even more
advantages to the single file.
What you are ignoring is that I did not say the total amount of data
downloaded for a document and included resources should be less than
1160 bytes. I was referring to the resource, and there are several
scripts out there that would meet this standard, including some
of your own (e.g. your xmlhttp.js is 510 bytes, _uncompressed_).
xmlhttp.js is not a file I'd use in a commercial environment (it
doesn't do anything), it's specifically designed as an education tool.
It's also completely irrelevant to a discussion about whether it's one file
or many.
But, as you are kindly keep reminding me, the question was: One file or
many? At first you were recommending one file, and I have pointed out
that since it is likely that one file will grow large, it is not viable,
One file will grow no larger than many files; if you're incompetent
enough to let your pages grow indiscriminately, you'll do it regardless.
I still wonder what you would call a library,


any code that is specifically designed to be re-used across pages and
situations rather than doing the minimum required to solve the problem
relevant to the page.

Jim.
Mar 23 '06 #21
Randy Webb wrote:
Ian Collins said the following on 3/23/2006 12:40 AM:
Thomas 'PointedEars' Lahn wrote:

<snip>
I am not referring to your other "arguments", because your entire
"argument"
is only a red herring (ignoratio elenchi[1]). Please troll elsewhere.

Well that's another first, I've never trolled or been called a troll in
a decade of Usenet use.

Welcome to the "Thomas called me a Troll" Club. You are member # 1232132
to date. Every time he calls you a Troll, you get a vote. I have around
16 million votes to date.

Welcome to the club and enjoy the benefits of membership :)

Do I get a T-shirt?

--
Ian Collins.
Mar 23 '06 #22
Ian Collins said the following on 3/23/2006 2:09 PM:
Randy Webb wrote:
Ian Collins said the following on 3/23/2006 12:40 AM:
Thomas 'PointedEars' Lahn wrote:


<snip>
I am not referring to your other "arguments", because your entire
"argument"
is only a red herring (ignoratio elenchi[1]). Please troll elsewhere.

Well that's another first, I've never trolled or been called a troll in
a decade of Usenet use.


Welcome to the "Thomas called me a Troll" Club. You are member # 1232132
to date. Every time he calls you a Troll, you get a vote. I have around
16 million votes to date.

Welcome to the club and enjoy the benefits of membership :)

Do I get a T-shirt?


Yeah but you gotta get 100 votes first :)

--
Randy
comp.lang.javascript FAQ - http://jibbering.com/faq & newsgroup weekly
Javascript Best Practices - http://www.JavascriptToolbox.com/bestpractices/
Mar 24 '06 #23
