Jargons of Info Tech industry

(A Love of Jargons)

Xah Lee, 2002 Feb

People in the computing field like to spur the use of spurious jargon.
The less educated they are, the more they like extraneous jargon, as in
the Unix & Perl community. Mathematicians are unlike this: mathematics
has no fewer jargon terms, but each and every one is absolutely
necessary. For example: polytope, manifold,
injection/bijection/surjection, group/ring/field, homological,
projective, pencil, bundle, lattice, affine, topology, isomorphism,
isometry, homeomorphism, aleph-0, fractal, supremum/infimum, simplex,
matrix, quaternion, derivative/integral, and so on. Each and every one
of these captures a concept for which practical and theoretical
considerations made the term a necessity. Often there are synonyms for
them because of historical developments, but never “jargon for
jargon's sake”, because mathematicians hate bloat and irrelevance.

The jargon-soaked stupidity in the computing field can be grouped into
classes. First of all, there are jargons for marketing purposes. Thus
you have Mac OS “X”, Windows “XP”, Sun OS renamed to Solaris with
its versioning confusion of 4.x to 7 to 8, and also the so-called
“platform” instead of OS. One flagrant example is Sun Microsystems'
Java stuff: Oak, Java, JDK, JSDK, J2EE, J2SE, enterprise edition or no,
from Java 1.x to 1.2 == Java 2, now 1.3, JavaOne, JFC, Jini, JavaBeans,
entity beans, AWT, Swing... fucking stupid Java and fuck Sun
Microsystems. This is just one example of the jargon hodgepodge of one
single commercial entity. Marketing jargon cannot be avoided in modern
society; it abounds outside the computing field too. The jargon of
marketing comes from business practice, and it is excusable because it
is kind of a necessity, or can be considered a naturally evolved
strategy for attracting attention in a laissez-faire economic system.

The other class of jargon stupidity comes from computing practitioners,
of which the Unix/Perl community is exemplary. For example, the names
Unix & Perl are themselves good examples of buzzing jargon. Unix is
supposedly a play on Multics, and hints at the offensive and tasteless
term eunuchs. PERL is cooked up to mean “Practical Extraction &
Reporting Language”, and, for the precise marketing drama, also
“Pathologically Eclectic Rubbish Lister”. These types of jargon
exude juvenile humor; cheesiness and low taste are their hallmark. If
you are familiar with Unixism and Perl programming, you'll find tons
and tons of such jargon embraced and verbalized by Unix & Perl lovers,
e.g. grep, glob, shell, pipe, man, regex, more, less, tarball, shebang,
Schwartzian Transform, croak, bless, interpolation, TIMTOWTDI, DWIM,
RFC, RTFM, I-ANAL, YMMV, and so on.

There is another class of jargon moronicity, which I find the most
damaging to society: the spurious and vague terms used and brandished
about by programmers, which we see and hear daily in design meetings,
online tech group postings, and even in lots of computing textbooks and
tutorials. I think the reason for these is that this massive body of
average programmers usually doesn't have much knowledge of significant
mathematics, yet is capable of technical thinking that is not too
abstract; thus you end up with these people defining or hatching terms
a dime a dozen that are vague, context-dependent, and vacuous, and
whose commonality is often the result of sopho-morons trying to sound
big.

Here are some examples of the terms in question:

• anonymous functions or lambda functions
• closure
• exceptions (as in Java)
• list, array, vector, aggregate
• hash (or hash table) ← fantastically stupid
• rehash (as in csh or tcsh)
• regular expression (as in regex, grep, egrep, fgrep)
• name space (as in Scheme vs Common Lisp debates)
• depth first/breadth first (as in tree traversing.)
• operator
• operator overloading
• polymorphism
• inheritance
• first class objects
• pointers, references
• tail recursion

My time is limited, so I'll just give a brief explanation of my thesis
on a select few of these examples among the umpteen.

A branch of math called lambda calculus, on which much of the theory
of computation is based, is the origin of the jargon _lambda function_
that is so frequently reciprocated by advanced programming donkeys. In
practice, a subroutine without side effects is what “lambda
function” is supposed to mean. Functional languages can often define
such a function without assigning it to some variable (name); therefore
the “function without side effects” is also called an “anonymous
function”. One can see that these are two distinct concepts. If
mathematicians were designing computer languages, they would probably
just call such a thing a _pure function_. The term conveys the meaning
without the “lambda” abstruseness. (In fact, the
mathematics-oriented language Mathematica refers to lambda functions as
pure functions, with the keyword Function.) Because most programmers
are sopho-morons who are less capable of clear thinking but
nevertheless possess human vanity, we can see that they have not
adopted the clear and fitting term; instead you see lambda-function
this and that obfuscations dropping from their mouths constantly.
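
To make the distinction concrete, a minimal sketch in Python (one
possible language; the essay itself names none), showing that anonymity
and purity are independent properties:

    # A named function with no side effects -- what a mathematician
    # might simply call a pure function.
    def square(x):
        return x * x

    # The same computation as an anonymous function ("lambda"):
    # no name is assigned; it is defined and used in place.
    squares = list(map(lambda x: x * x, [1, 2, 3]))  # [1, 4, 9]

    # Anonymous but NOT pure: this lambda mutates state outside
    # itself, so the two jargon terms name distinct concepts.
    log = []
    noisy = lambda x: (log.append(x), x * x)[1]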

Now, the term “closure” can mean, and indeed has meant, several
things in the computing field. The most common meaning is a subroutine
that holds some memory but without the disadvantages of modifying a
global variable. Usually such a thing is a feature of a programming
language. Taken to the extreme, we have what's called Object-Oriented
Programming methodology and languages. The other meaning of
“closure” I have seen in textbooks is to indicate that the things
in the language are “closed” under the operations of the language.
For example, in some languages you can apply operations or subroutines
to anything in the language. (These languages are often what's called
“dynamically typed” or “typeless”.) However, in other languages,
things have types and cannot be passed to subroutines or operators
arbitrarily. One can see that the term “closure” is quite vague in
conveying its meaning. The term is nevertheless very popular among
talkative programmers and dense tutorials, precisely because it is
vague and mysterious. These pseudo-wit living zombies never thought for
a moment that they are using a moronic term, mostly because they never
clearly understood the concepts behind the term in its various
contexts. One can particularly see this exhibited among Perl
programmers. (For an example of the fantastically stupid write-ups on
closure by the Perl folks, see “perldoc perlfaq7” and “perldoc
perlref”.)
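
As an illustration of the first meaning, a minimal sketch in Python: a
counter that “holds some memory” across calls without touching any
global variable:

    def make_counter():
        count = 0              # state captured by the closure
        def next_count():
            nonlocal count     # refer to the enclosing variable
            count += 1
            return count
        return next_count

    counter = make_counter()
    counter()   # 1
    counter()   # 2 -- the subroutine remembers, yet no global changes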

In the so-called “high-level” computing languages, there are often
data types that are some kind of collection. The most illustrative is
the LISt Processing language's list. The essential concept is that the
language can treat a collection of things as if it were a single
entity. As computer languages evolved, such collection features also
diversified, in syntax, semantics, and implementation. Thus, besides
lists, there are also terms like vector, array, matrix, tree, and
hash/“hash table”/dictionary. Often each particular term is to
convey a particular implementation of a collection, so that it has
certain properties to facilitate specialized uses of such a groupy. The
Java language has such groupies and can illustrate the point well. In
Java, there is this hierarchy of collection-type things:

Collection
    Set (AbstractSet, HashSet)
        SortedSet (TreeSet)
    List (AbstractList, LinkedList, Vector, ArrayList)

Map (AbstractMap, HashMap, Hashtable)
    SortedMap (TreeMap)

The words without parentheses are Java interfaces, and the ones in
parentheses are implementations. Each interface holds a concept; the
deeper the level, the more specific or specialized. The implementations
carry out the concepts, and different implementations give different
algorithmic properties. Essentially, these hierarchies of Java show the
potential complexity and confusion around groupy entities in computer
languages. Now, the programmers we see daily, who have never really
thought these things out, will attach their own specific meanings to
list/array/vector/matrix/etc. types of jargon in drivel and arguments,
oblivious to any thought of formalizing what the fuck they are really
talking about. (One might think from the above tree diagram that Java
the language has at least drawn a clear distinction between interface
and implementation, whereas in my opinion it is one fantastic fuck-up
too, in many respects.)
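
The point that different implementations give different algorithmic
properties survives outside Java too; a minimal sketch in Python:

    items = list(range(100000))        # list: ordered, O(n) membership
    lookup = set(items)                # set: unordered, O(1) membership
    table = {i: i * i for i in items}  # dict (“hash table”): keyed access

    99999 in items   # True, but scans the whole list
    99999 in lookup  # True, found by hashing in constant time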

---------------------
This post is archived at
http://xahlee.org/UnixResource_dir/writ/jargons.html
© Copyright 2002 by Xah Lee.

Xah
xa*@xahlee.org
http://xahlee.org/

Aug 12 '05
John Bokma <jo**@castleamber.com> writes:
us****@isbd.co.uk wrote:

They don't have killfiles or scoring


You can install a mod to kill people.


Gee, didn't know that it's that powerful. One more reason not to use web
forums :-)

Dragan

--
Dragan Cvetkovic,

To be or not to be is true. G. Boole No it isn't. L. E. J. Brouwer

!!! Sender/From address is bogus. Use reply-to one !!!
Aug 26 '05 #101
us****@isbd.co.uk wrote:
.... snip ...
Same applies to most newsfeeds, depending on retention. If you
want to look a long way back in a thread, use Google Groups.


Except for those anti-social zealots who use an X-noarchive header.

--
"If you want to post a followup via groups.google.com, don't use
the broken "Reply" link at the bottom of the article. Click on
"show options" at the top of the article, then click on the
"Reply" at the bottom of the article headers." - Keith Thompson
Aug 26 '05 #102
John Bokma wrote:
Ulrich Hobelmann <u.*********@web.de> wrote:
John Bokma wrote:
http://www.phpbb.com/mods/


Great. How can I, the user, choose how to use a mod on a given web
server?


Ask the admin?


And that is, in your opinion, completely comparable to running your own,
private client? Is the admin obliged to install the mod? Is the admin
even reachable?
What if the web server runs another board than PHPBB?


Check if there is a mod, and ask the admin.


See above.
Does the user want this? And with a user stylesheet you can change it
quite radically :-)


The look, not the feel.


Wild guess: (signed) javascript and iframes? on your local computer?

Otherwise: fetch HTML, parse it, restructure it, and have the
application run a local webserver. Python, Perl, piece of cake.


You seem to be forgetting that we are mainly talking about end users
here, who most probably will not have sufficient expertise to do all
that. And even if they do, it's still time consuming.
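
(For reference, a minimal sketch in Python of the "fetch HTML, parse
it, restructure it" idea mentioned above, using only the standard
library; the forum URL is hypothetical:)

    import urllib.request
    from html.parser import HTMLParser

    class LinkTexts(HTMLParser):
        """Collect the text of every <a> element -- a crude
        restructuring of a forum page into a plain list."""
        def __init__(self):
            super().__init__()
            self.in_link = False
            self.texts = []
        def handle_starttag(self, tag, attrs):
            if tag == "a":
                self.in_link = True
        def handle_endtag(self, tag):
            if tag == "a":
                self.in_link = False
        def handle_data(self, data):
            if self.in_link and data.strip():
                self.texts.append(data.strip())

    page = urllib.request.urlopen("http://board.example.com/forum").read()
    parser = LinkTexts()
    parser.feed(page.decode("utf-8", "replace"))
    print(parser.texts)   # e.g. thread titles, ready to re-serve locally
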
Aug 26 '05 #103
John Bokma wrote:

so use Lynx :-)

One forum I visit is about scorpions. And really, it's a bit easier to
talk about scorpions if you have an image to look at :-D.

In short: Usenet = Usenet, and www = www. Why some people want to move
people from www to Usenet or vice versa is beyond me. If 80% of the current
Usenet users stop posting, Usenet is not going to die :-D


Agreed. This is actually the first post of yours with which I agree
totally. From your other posts I got the impression that you are one of
those people who are trying to make Usenet and WWW more similar to one
another.

-- Denis

Aug 26 '05 #104

John Bokma wrote:
"T Beck" <Tr********@Infineon.com> wrote:
If we argue that people are evolving the way e-mail is handled, and
adding entire new feature sets to something which has been around
since the earliest days of the internet, then that's perfectly
feasible. HTML itself has grown. We've also added Javascript and
Shockwave.


They are not additions to HTML, like PNG is no addition to HTML, or wav,
mp3, etc.

[snip]

Wasn't the point... I never said they were. HTML is at version 4.0 (I
think?) now, AND we've added extra layers of stuff you can use
alongside of it. The internet is a free-flowing, evolving place... to
try to protect one little segment like usenet from ever evolving is
just ensuring its slow death, IMHO.

That's all...

--T Beck

Aug 26 '05 #105
On Fri, 26 Aug 2005, John Bokma wrote:
people from www to Usenet or vice versa is beyond me. If 80% of the current
Usenet users stop posting, Usenet is not going to die :-D


Heh. Quite the opposite, I reckon: it would get much better (higher SNR)! :-)

--
Rich Teer, SCNA, SCSA, OpenSolaris CAB member

President,
Rite Online Inc.

Voice: +1 (250) 979-1638
URL: http://www.rite-group.com/rich
Aug 26 '05 #106
T Beck wrote:

Wasn't the point... I never said they were. HTML is at version 4.0 (I
think?) now, AND we've added extra layers of stuff you can use
alongside of it. The internet is a free-flowing, evolving place... to
try to protect one little segment like usenet from ever evolving is
just ensuring its slow death, IMHO.

That's all...


HTML is at version 4.01 to be precise, but that is precisely off-topic.
This discussion has been going on for long enough. It is held in a
large crosspost to newsgroups that have nothing to do with HTML or the
evolution of Usenet. The bottom line is that most Usenet users like it
the way it is now, and it certainly serves its purpose. The Web and
Usenet should not mix, as they are two distinct entities; merging them
would lose some of their distinctive qualities, making the Internet a
poorer place.

I suggest letting the matter rest or taking it to a more appropriate
newsgroup.

-- Denis
Aug 26 '05 #107
John Bokma wrote:
I have cookies off, with explicit exception for sites where
I want cookies. When the crappy website doesn't bother to MENTION that
it wants cookies, i.e. give me an error page, how am I to know that it
needs cookies? Do I want EVERY website to ask me "do you allow XY to
set a cookie?" NO!


So what do you want? An error page for every site that wants to set a
cookie?


No, the few sites where I actually have to log in to do anything useful,
when they're well-coded, tell me that they need cookies, and if I think
I like that website I make an exception entry for that site, allowing
cookies. Most sites just bombard you with useless, crap cookies (maybe
advertising), so they are silently ignored by my browser.

The only thing I hate is when I am directed to some website that needs
cookies, but doesn't tell me. A couple times I did a survey, wasting
maybe 10 minutes of my life for a good cause, and then there was an
error. Great! I guess that page needed cookies, but didn't bother to
tell me. Back button didn't work, either, so I just left that website.

OTOH, people who can't code can be fun, too, such as when you visit a
website and there are lots of PHP, Java, SQL, or ASP errors ;)

--
I believe in Karma. That means I can do bad things to people
all day long and I assume they deserve it.
Dogbert
Aug 26 '05 #108
>The only thing I hate is when I am directed to some website that needs
cookies, but doesn't tell me. A couple times I did a survey, wasting
maybe 10 minutes of my life for a good cause, and then there was an
error. Great! I guess that page needed cookies, but didn't bother to
tell me. Back button didn't work, either, so I just left that website.


Some sites do much worse than that. If you have cookies off, they
cause an infinite redirect loop. Sometimes my browser manages to
detect this after a few minutes and shut it off, and sometimes it
doesn't (usually on different sites). I think I can manually get
out of this with the STOP button, but until I do, it likely causes
a lot of useless load on the web site.

Gordon L. Burditt
Aug 26 '05 #109
CBFalconer wrote:
Chris Head wrote:

.... snip ...
Why can't we use the Web for what it was meant for: viewing
hypertext pages? Why must we turn it into a wrapper around every
application imaginable?

Because the Lord High PoohBah (Bill) has so decreed. He has
replaced General bullMoose.


Not particularly his doing. SGI was using a Netscape plugin to
distribute and install operating-system patches when Billionaire
"Intelligent Design" Billy was still denying that TCP/IP had a future.

And there are places for web forums: public feedback pages, for example.
(Add RSS and/or e-mail and/or NNTP feeds for more advanced users.)

--
John W. Kennedy
"The grand art mastered the thudding hammer of Thor
And the heart of our lord Taliessin determined the war."
-- Charles Williams. "Mount Badon"
Aug 26 '05 #110
John Bokma wrote:
Chris Head <ch*******@hotmail.com> wrote:

John Bokma wrote:
Additionally, a user interface operating inside an HTML
renderer can NEVER be as fast as a native-code user interface with
only the e-mail message itself passed through the renderer.

Nowadays, more than futile.


Sorry, I don't understand what you mean. Even on my 2.8GHz Pentium 4,
using Thunderbird to juggle messages is noticeably faster than
wandering around Hotmail. Complex HTML rendering still isn't
absolutely instantaneous.

It can be made much faster. There will always be a delay since messages
have to be downloaded, but with a fast connection and a good design, the
delay will be very very small and the advantages are big.


What advantages would those be (other than access from 'net cafes, but
see below)?

[snip]
... and purpose-built client applications (e.g. Thunderbird) don't?

if A -> B, it doesn't say that B -> A :-) I.e. that it works via HTML
doesn't mean it doesn't with a dedicated client ;-).

I live in Mexico, most people here rely on so called Internet cafes for
their connection, and even the use of a computer. For them Thunderbird
*doesn't work*.


This point I agree with. There are some situations - 'net cafes included
- where thick e-mail clients don't work. Even so, see below.

Maybe I'm old-fashioned but I still very much prefer thick clients.
They simply feel much more solid. Perhaps part of it is that thin
clients have to communicate with the server at least a little bit for
just about everything they do, while thick clients can do a lot of
work without ANY Internet round-trip delay at all.

Each has its place. A bug in a thick client means each and every one
has to be fixed. With a thin one, just one has to be fixed :-D.


True. However, if people are annoyed by a Thunderbird bug, once it's
fixed, most people will probably go and download the fix (the
Thunderbird developers really only need to fix the bug once too).

Hotmail has to talk to the server to
move a message from one mailbox to another. Thunderbird doesn't.

Depends on where your mailbox resides. Isn't there something called
MAPI? (I haven't used it myself, but I recall something like that).


IMAP. It stores the messages on the server. Even so, it only has to
transfer the messages, not the bloated UI. I concede that Webmail might
be just as fast when using a perfectly-designed Javascript/frames-driven
interface. In the real world, Webmail isn't (unfortunately) that perfect.

As I said above regarding 'net cafes:

If the Internet cafe has an e-mail client installed on their computers,
you could use IMAP to access your messages. You'd have to do a bit more
configuration than for Webmail, so it depends on the user I guess.
Personally I doubt my ISP would like me saving a few hundred megs of
e-mail on their server, while Thunderbird is quite happy to have 1504
messages in my Inbox on my local machine. If I had to use an Internet
cafe, I would rather use IMAP than Webmail.

Ergo,
Thunderbird is faster as soon as the Internet gets congested.

Ah, yeah, wasn't that predicted to happen in like 2001?


Wasn't what predicted to happen? Congestion? It happens even today
(maybe it's the Internet, maybe it's the server, whatever...). Hotmail
is often pretty slow.

Also, unless you have some program that kills spam on the server, you
have to download all with Thunderbird. I remember a funny day when I got
2000 messages/hour due to a virus outbreak :-( With hotmail, if you have
100 new messages you download them when you read them. Or kill them when
you don't want to read.


Fortunately I'm not plagued by spam. I get around 150 messages per day.
Of those, about 140 are from a mailing list, 5 are personal, and 5 are
spam. I used to get about 100 messages per day of which 90 or so were
spam, but it suddenly stopped. To this day, I have not figured out why.
Nevertheless, I agree that not having to download all those messages is
one place where Webmail blows POP out of the water (but IMAP, which
could be a sort of "middle ground", doesn't suffer from this).
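
(A minimal sketch in Python of that IMAP middle ground, pulling only
headers so message bodies stay on the server; the server name and
credentials are placeholders:)

    import imaplib

    mail = imaplib.IMAP4_SSL("imap.example.com")   # placeholder server
    mail.login("user", "password")                 # placeholder login
    mail.select("INBOX", readonly=True)

    # Fetch only From/Subject headers of unseen messages; the bodies
    # (and any 2000-messages-per-hour flood) are never downloaded.
    _, data = mail.search(None, "UNSEEN")
    for num in data[0].split():
        _, msg = mail.fetch(num.decode(),
                            "(BODY.PEEK[HEADER.FIELDS (FROM SUBJECT)])")
        print(msg[0][1].decode("utf-8", "replace"))

    mail.logout()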

Chris
Aug 26 '05 #111
Denis Kasak <de*********@gmail.com> wrote:
John Bokma wrote:

so use Lynx :-)

One forum I visit is about scorpions. And really, it's a bit easier
to talk about scorpions if you have an image to look at :-D.

In short: Usenet = Usenet, and www = www. Why some people want to
move people from www to Usenet or vice versa is beyond me. If 80% of
the current Usenet users stop posting, Usenet is not going to die :-D


Agreed. This is actually the first post of yours with which I agree
totally. From your other posts I got the impression that you are one
of those people who are trying to make Usenet and WWW more similar to
one another.


No, not at all. My point is what I wrote above. But also, a lot of
functionality available on Usenet is also available on www. You don't
*have* to convert people to Usenet, or to warn them against a forum on
www. For day-to-day usage I experience hardly any difference. I mean, I am
not struggling when I post a message on a web board. And if things take
time to download (for example a thread with many, many pictures), I switch
to another tab and do something else (tabbed browsing is fun :-). Each has its
place. I don't want www on Usenet, and I don't want Usenet on www.

--
John Small Perl scripts: http://johnbokma.com/perl/
Perl programmer available: http://castleamber.com/
Happy Customers: http://castleamber.com/testimonials.html

Aug 26 '05 #112
Rich Teer <ri*******@rite-group.com> wrote:
On Fri, 26 Aug 2005, John Bokma wrote:
people from www to Usenet or vice versa is beyond me. If 80% of the
current Usenet users stop posting, Usenet is not going to die :-D


Heh. Quite the opposite, I reckon: it would get much better (higher
SNR)! :-)


:-D. I recently contributed to a thread in which someone was afraid that
Usenet was going to die, because he had the impression there were fewer
people on it. There are more people on it compared to 20 years ago, when
it was also about to die (as it is, like, every 2 years).

--
John Small Perl scripts: http://johnbokma.com/perl/
Perl programmer available: http://castleamber.com/
Happy Customers: http://castleamber.com/testimonials.html

Aug 26 '05 #113
Denis Kasak <de*********@gmail.com> wrote:
John Bokma wrote:

You can't be sure: errors in the handling of threads can cause a
buffer overflow, same for spelling checking :-D
Yes, they can, provided they are not properly coded. However, those
things only interact locally with the user and have no or very
limited interaction with the user on the other side of the line. As
such, they can hardly be exploitable.


Uhm... one post can affect a number of clients, hence quite exploitable.
Some people never use them, and hence they use memory and add risks.


On a good newsreader the memory use difference should be irrelevantly
small, even if one does not use the features. I would call that a
nitpicky argument.


Xnews - 10 M
Thunderbird - 20 M

There was a time I had only 128M in this computer :-D. And there was a time
I read news on a RISC OS machine. I guess the client was about 300 K (!).
Also, the risk in question is not comparable
because of the reasons stated above. The kind of risk you are talking
about happens with /any/ software.
True. The more code, the more possibilities for holes.
To stay away from that we shouldn't
have newsreaders (or any other software, for that matter) in the first
place.
telnet :-P.
Of course HTML can be useful on Usenet. The problem is that it will
be abused much more often than used.


No, you missed the point. I am arguing that HTML is completely and
utterly /useless/ on Usenet.


But I beg to differ :-). I can think of several *good* uses of HTML on
Usenet. But like I said, it will be abused. And you can't enforce a subset
of HTML.
Time spent for writing HTML in Usenet
But you are not going to *write* HTML, you let your client hide that. I
mean, it's not that hard to have a client turn *bold* into <strong>bold
</strong> :-) (see the sketch at the end of this post).
posts is comparable to that spent on arguing about coding style or
Agreed, I have learned things from arguing on coding style, even adjusted
my style based on it.
writing followups to Xah Lee.
Ok, now there is something one shouldn't spend time on :-)
It adds no further insight on a
particular subject,
Yes, it does. That's why, for example, figures, tables, and now and
then colours are used in scientific publications. ASCII art, now that's
a huge waste of time.
but _does_ add further delays, spam, bandwidth
consumption, and exploits, and is generally a pain in the arse. It's
redundant.


I have to disagree. Mind, I am not saying that HTML *should* be used on
Usenet, I am happy with Usenet as it is, but I wouldn't call it useless nor
redundant.
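
(Turning *bold* into <strong>, as mentioned above, really is about a
one-liner; a sketch in Python:)

    import re
    text = "it's not *that* hard"
    html = re.sub(r"\*(.+?)\*", r"<strong>\1</strong>", text)
    # -> "it's not <strong>that</strong> hard"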

--
John Small Perl scripts: http://johnbokma.com/perl/
Perl programmer available: http://castleamber.com/
Happy Customers: http://castleamber.com/testimonials.html

Aug 26 '05 #114
Denis Kasak <de*********@gmail.com> wrote:
John Bokma wrote:
Ulrich Hobelmann <u.*********@web.de> wrote:
John Bokma wrote:
http://www.phpbb.com/mods/

Great. How can I, the user, choose how to use a mod on a given web
server?
Ask the admin?


And that is, in your opinion, completely comparable to running your
own, private client?


Oh, but if you want your own private client, feel free to come up with
one. I for one would welcome an XML interface to, for example, phpBB.
(Not sure if such a thing exists.) So I agree with you in part.

However, how many people do *need* a kill file? Most boards have active
moderators. Also, in my experience, most boards are a tighter-knit
crowd with less need for kill filing.
Is the admin obliged to install the mod?
No, but in my experience, they listen.
Is the admin
even reachable?
Of course.
Otherwise: fetch HTML, parse it, restructure it, and have the
application run a local webserver. Python, Perl, piece of cake.


You seem to be forgetting that we are mainly talking about end users
here


No, I am not. Most end users of those boards don't *require* what you
want; you look at a board from a programmer's point of view. And hence,
as a programmer you *can* do such things.
who most probably will not have sufficient expertise to do all
that. And even if they do, it's still time consuming.


Moreover, he/she doesn't care.


If I am not happy with my Usenet client, I have the same problem. I like
Xnews for example, but AFAIK it's closed source. I don't know of any
open source Windows client that comes close to Xnews.

It's time consuming because there is (yet) no need for it. When I
started to use Usenet there were only a handful of clients (IIRC); nn
and another one (rn?) are the only ones that I can recall.

Like I said, it's not that hard to create a SOAP/XML-RPC interface to,
for example, phpBB. Maybe it's already there. The tools are there. And a
next step could be to create a wrapper: one end acts as a local nntp
server, the other end talks using XML with phpBB.

Once that's written, you could use (probably within limits) a Usenet
client :-)
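
(A sketch in Python of the client half of such a wrapper; the endpoint
and the method names are hypothetical, since stock phpBB exposes no
such interface:)

    import xmlrpc.client

    # Hypothetical XML-RPC endpoint and methods, for illustration only.
    board = xmlrpc.client.ServerProxy("http://board.example.com/xmlrpc")

    for topic in board.forum.listTopics("scorpions"):
        print(topic["title"])
        for post in board.forum.listPosts(topic["id"]):
            print("  ", post["author"], post["body"][:60])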

Now this seems to be cross posted in several comp.lang groups. Anyone?

--
John Small Perl scripts: http://johnbokma.com/perl/
Perl programmer available: http://castleamber.com/
Happy Customers: http://castleamber.com/testimonials.html

Aug 26 '05 #115
"T Beck" <Tr********@Infineon.com> wrote:

John Bokma wrote:
"T Beck" <Tr********@Infineon.com> wrote:
> If we argue that people are evolving the way e-mail is handled, and
> adding entire new feature sets to something which has been around
> since the earliest days of the internet, then that's perfectly
> feasible. HTML itself has grown. We've also added Javascript and
> Shockwave.
They are not additions to HTML, like PNG is no addition to HTML, or
wav, mp3, etc.

[snip]

Wasn't the point... I never said they were.


"HTML itself has grown. We've also added Javascript"

I read that as: JavaScript is an addition to HTML.
HTML is at version 4.0 (I
think?)
4.01? And I think it will stay there, since XML seems to be the future.
now, AND we've added extra layers of stuff you can use
alongside of it. The internet is a free-flowing evolving place... to
try to protect one little segment like usenet from ever evolving is
just ensuring its slow death, IMHO.


And if so, who cares? As long as people hang out on Usenet, it will stay.
Does Usenet need all those extra gimmicks? To me, it would be nice if a
small set were available. But need? No.

The death of Usenet has been predicted for ages. And I see only more and
more groups, and maybe more and more people on it.

As long as people who have something sensible to say keep using it, it
will stay.

--
John Small Perl scripts: http://johnbokma.com/perl/
Perl programmer available: http://castleamber.com/
Happy Customers: http://castleamber.com/testimonials.html

Aug 26 '05 #116
Ulrich Hobelmann <u.*********@web.de> wrote:
John Bokma wrote:
I have cookies off, with explicit exception for sites where
I want cookies. When the crappy website doesn't bother to MENTION
that it wants cookies, i.e. give me an error page, how am I to know
that it needs cookies? Do I want EVERY website to ask me "do you
allow XY to set a cookie?" NO!


So what do you want? An error page for every site that wants to set a
cookie?


No, the few sites where I actually have to log in to do anything
useful, when they're well-coded, tell me that they need cookies, and
if I think I like that website I make an exception entry for that
site, allowing cookies. Most sites just bombard you with useless,
crap cookies (maybe advertising), so they are silently ignored by my
browser.


Delete them after each session automatically, except the ones on the
exception list. You are clearly not an average user, so your usage pattern
probably only messes up the stats they obtain via cookies anyway.

I long ago gave up on manually accepting each and every cookie and
trying to guess its purpose.

--
John Small Perl scripts: http://johnbokma.com/perl/
Perl programmer available: http://castleamber.com/
Happy Customers: http://castleamber.com/testimonials.html

Aug 26 '05 #117
Chris Head <ch*******@hotmail.com> wrote:
John Bokma wrote:
Chris Head <ch*******@hotmail.com> wrote:
[HTML]
It can be made much faster. There will always be a delay since
messages have to be downloaded, but with a fast connection and a good
design, the delay will be very very small and the advantages are big.
What advantages would those be (other than access from 'net cafes, but
see below)?


And workplaces. Some people have more than one computer in the house. My
partner can check her email when I had her over the computer. When I
want to check my email when she is using it, I have to change the
session, fire up Thunderbird (which eats away 20M), and change the
session back.

[ .. ]
Each has its place. A bug in a thick client means each and every one
has to be fixed. With a thin one, just one has to be fixed :-D.


True. However, if people are annoyed by a Thunderbird bug, once it's
fixed, most people will probably go and download the fix (the
Thunderbird developers really only need to fix the bug once too).


Most people who use Thunderbird, yes. Different with OE, I am sure. With
a thin client *everybody*.
Depends on where your mailbox resides. Isn't there something called
MAPI? (I haven't used it myself, but I recall something like that).


IMAP. It stores the messages on the server. Even so, it only has to
transfer the messages, not the bloated UI.


But technically the UI (whether bloated or not) can be cached, and with
Ajax/frames, etc. there is not really a need to refresh the entire page.
With smarter techniques (like automatically zipping pages), and
techniques like transmitting only deltas (Google experimented with this
some time ago) and better and faster rendering, the UI could be as fast
as a normal UI.

Isn't the UI in Thunderbird and Firefox created using JavaScript and
XML? Isn't that how future UIs are going to be made?
I concede that Webmail
might be just as fast when using a perfectly-designed
Javascript/frames-driven interface. In the real world, Webmail isn't
(unfortunately) that perfect.
Maybe because a lot of users aren't really heavy users. A nice example
(IMO) of a web client that works quite well: webmessenger (
http://webmessenger.msn.com/ ). It has been some time since I last used
it, but if I recall correctly I hardly noticed that I was chatting in a
JavaScript pop-up window.
As I said above regarding 'net cafes:

If the Internet cafe has an e-mail client installed on their
computers, you could use IMAP to access your messages. You'd have to
do a bit more configuration than for Webmail, so it depends on the
user I guess. Personally I doubt my ISP would like me saving a few
hundred megs of e-mail on their server, while Thunderbird is quite
happy to have 1504 messages in my Inbox on my local machine. If I had
to use an Internet cafe, I would rather use IMAP than Webmail.


I'd rather have my email stored locally :-) But several webmail services
offer a form to download email.
Ergo,
Thunderbird is faster as soon as the Internet gets congested.


Ah, yeah, wasn't that predicted to happen in like 2001?


Wasn't what predicted to happen? Congestion? It happens even today
(maybe it's the Internet, maybe it's the server, whatever...). Hotmail
is often pretty slow.


I read some time ago that about 1/3 of traffic consists of BitTorrent
traffic... If the Internet gets congested, new techniques are needed,
like mod_gzip on every server, or a way to transfer only deltas of
webpages if an update occurred (like Google did some time ago). Better
handling of RSS too (I have the impression that there is no "page has
not been modified" mechanism in use as with plain HTTP, or at least I
see quite some clients fetch my feed every hour, again and again).
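
(The "not modified" mechanism does exist at the HTTP level; a sketch in
Python of a conditional fetch, with a placeholder feed URL and date:)

    import urllib.request
    import urllib.error

    req = urllib.request.Request(
        "http://example.com/feed.xml",   # placeholder URL
        headers={"If-Modified-Since": "Sat, 27 Aug 2005 00:00:00 GMT"},
    )
    try:
        feed = urllib.request.urlopen(req).read()   # 200: content changed
    except urllib.error.HTTPError as err:
        if err.code == 304:
            feed = None    # 304 Not Modified: nothing to download
        else:
            raise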

--
John Small Perl scripts: http://johnbokma.com/perl/
Perl programmer available: http://castleamber.com/
Happy Customers: http://castleamber.com/testimonials.html

Aug 26 '05 #118
John Bokma wrote:
[cookies]
Delete them after each session automatically, except the ones on the
exception list.
But why? I simply don't even take them, except those on my exception list ;)

Some people have all cookies turned off.
You are clearly not an average user, so your usage pattern
probably only messes up the stats they obtain via cookies anyway.

I long ago gave up on manually accepting each and every cookie and
trying to guess its purpose.


Exactly. That's why I don't do that. I just block them all, except
some good sites where I want to be auto-logged in.

--
I believe in Karma. That means I can do bad things to people
all day long and I assume they deserve it.
Dogbert
Aug 26 '05 #119
On Fri, 26 Aug 2005, John Bokma wrote:
And workplaces. Some people have more than one computer in the house. My
partner can check her email when I had her over the computer. When I


I know this is entirely inappropriate and OT, but am I the only person
who reads that sentence with a grin? The idea of my wife checking her
email while I'm "doing her" over my computer is most amusing! :-)

--
Rich Teer, SCNA, SCSA, OpenSolaris CAB member

President,
Rite Online Inc.

Voice: +1 (250) 979-1638
URL: http://www.rite-group.com/rich
Aug 26 '05 #120
> I know this is entirely inappropriate and OT, [...]

Yeah -- unlike the rest of this misbegotten thread, which is right
bang on-topic for all five newsgroups and is not suffering at all from
topic drift, no not in the least.

b
Aug 26 '05 #121
John Bokma <jo**@castleamber.com> writes:
Paul Rubin <http://ph****@NOSPAM.invalid> wrote:
Mike Meyer <mw*@mired.org> writes:
> Another advantage is that every internet-enabled computer today
> already comes with an HTML renderer (AKA browser)

No, they don't. Minimalist Unix distributions don't include a browser
by default. I know the BSDs don't, and suspect that Gentoo Linux
doesn't.


Lynx?


Emacs?


Neither one is installed by default on the systems in question. Both
are available via the system packaging tools.

fetch is installed on FreeBSD, but all it does is download the
contents of a URL - it doesn't render them.

<mike
--
Mike Meyer <mw*@mired.org> http://www.mired.org/home/mwm/
Independent WWW/Perforce/FreeBSD/Unix consultant, email for more information.
Aug 27 '05 #122
go***********@burditt.org (Gordon Burditt) writes:
HTML is designed to degrade gracefully (never mind that most web
authors and many browser developers don't seem to comprehend this), so
you don't really need a "subset" html to get the safety features you
want. All you need to do is disable the appropriate features in the
HTML renderer in your news and mail readers. JavaScript, Java, and any
form of object embedding. Oh yeah, and frames.


And links. And cookies. And any kind of external site or local
file access. And browser history.


That depends on whether you're trying to keep an HTML message from
doing anything nasty (like revealing that you read it) when you render
it, or to make sure it *never* does anything nasty, no matter what you
do with the message.

If all you want is the former - which is what the OP asked for, and I
was replying to - then nothing on the list you gave is required. Some
of the things you list are a danger even without HTML; most modern
news/mail readers will follow links in flat ASCII.

<mike
--
Mike Meyer <mw*@mired.org> http://www.mired.org/home/mwm/
Independent WWW/Perforce/FreeBSD/Unix consultant, email for more information.
Aug 27 '05 #123
Ulrich Hobelmann <u.*********@web.de> writes:
No, the few sites where I actually have to log in to do anything
useful, when they're well-coded, tell me that they need cookies, and
if I think I like that website I make an exception entry for that
site, allowing cookies. Most sites just bombard you with useless,
crap cookies (maybe advertising), so they are silently ignored by my
browser.
I believe (but I'm not sure) that some releases of Apache could be
configured in such a way that they would start using cookies without
you having to turn them on.
The only thing I hate is when I am directed to some website that needs
cookies, but doesn't tell me. A couple times I did a survey, wasting
maybe 10 minutes of my life for a good cause, and then there was an
error. Great! I guess that page needed cookies, but didn't bother to
tell me. Back button didn't work, either, so I just left that website.


Try turning off JavaScript (I assume you don't because you didn't
complain about it). Most of the sites on the web that use it don't
even use the NOSCRIPT tag to notify you that you have to turn the
things on - much less use it to do something useful.

Sturgeon's law applies to web sites, just like it does to everything
else.

<mike

--
Mike Meyer <mw*@mired.org> http://www.mired.org/home/mwm/
Independent WWW/Perforce/FreeBSD/Unix consultant, email for more information.
Aug 27 '05 #124
John Bokma <jo**@castleamber.com> writes:
Chris Head <ch*******@hotmail.com> wrote:
I mean, the way
Webmail works, you're at the message list and click on a message to
view. This causes a whole new page, user-interface and all, to be
loaded. In comparison, that's like shutting down and re-opening your
e-mail program for every single message you want to view!

This can be designed much better by using iframes, maybe even Ajax.


Definitely with Ajax. That's one of the things it does really well.
Why can't we use the Web for what it was meant for: viewing hypertext
pages? Why must we turn it into a wrapper around every application
imaginable?

Because it works?


Because you can - if you know how to use HTML properly - distribute
your application to platforms you've never even heard of - like the
Nokia Communicator.

I started writing web apps when I was doing internal tools development
for a software development company that had 90+ different platform
types installed inhouse. It was a *godsend*. By deploying one
well-written app, I could make everyone happy, without having to do
versions for the Mac, Windows, DOS (this was a while ago), getting it
to compile on umpteen different Unix versions, as well as making it
work on proprietary workstation OS's.

Of course, considering the state of most of the HTML on the web, I
have *no* idea why most of them are doing this.

<mike
--
Mike Meyer <mw*@mired.org> http://www.mired.org/home/mwm/
Independent WWW/Perforce/FreeBSD/Unix consultant, email for more information.
Aug 27 '05 #125
John Bokma <jo**@castleamber.com> writes:
It's time consuming because there is (yet) no need for it. When I
started to use Usenet there were only a handful of clients (IIRC); nn
and another one (rn?) are the only ones that I can recall.


By the time nn was out, there were a number of radically different
alternatives. The original news client (not NNTP - it predated that)
was readnews. rn was the first alternative to gain any popularity. By
the time it came out, there were alternative curses-based readers like
notes and vnews. By the time nn came out, there were even X-based news
readers available, like xrn and xvnews.

It may be that the site you were at only offered a few readers. But
that's a different issue.

All of this is from memory, of course - and may well be wrong.

<mike
--
Mike Meyer <mw*@mired.org> http://www.mired.org/home/mwm/
Independent WWW/Perforce/FreeBSD/Unix consultant, email for more information.
Aug 27 '05 #126
Rich Teer <ri*******@rite-group.com> wrote:
On Fri, 26 Aug 2005, John Bokma wrote:
And workplaces. Some people have more than one computer in the house. My
partner can check her email when I had her over the computer. When I


I know this is entirely inappropriate and OT, but am I the only person
who reads that sentence with a grin? The idea of my wife checking her
email while I'm "doing her" over my computer is most amusing! :-)


Aargh :-D.

--
John Small Perl scripts: http://johnbokma.com/perl/
Perl programmer available: http://castleamber.com/
Happy Customers: http://castleamber.com/testimonials.html

Aug 27 '05 #127
Mike Meyer <mw*@mired.org> wrote:
John Bokma <jo**@castleamber.com> writes:
Paul Rubin <http://ph****@NOSPAM.invalid> wrote:
Mike Meyer <mw*@mired.org> writes:
> Another advantage is that every internet-enabled computer today
> already comes with an HTML renderer (AKA browser)

No, they don't. Minimalist Unix distributions don't include a browser
by default. I know the BSD's don't, and suspect that gentoo Linux
doesn't.

Lynx?


Emacs?


Neither one is installed by default on the systems in question. Both
are available via the system packaging tools.

fetch is installed on FreeBSD, but all it does is download the
contents of a URL - it doesn't render them.


My brain does :-D

--
John Small Perl scripts: http://johnbokma.com/perl/
Perl programmer available: http://castleamber.com/
Happy Customers: http://castleamber.com/testimonials.html

Aug 27 '05 #128
Mike Meyer <mw*@mired.org> wrote:
John Bokma <jo**@castleamber.com> writes:
It's time consuming because there is (yet) no need for it. When I
started to use Usenet there were only a handful of clients (IIRC); nn
and another one (rn?) are the only ones that I can recall.
By the time nn was out, there were a number of radically diffrent
alternatives. The original news client (not NNTP - it predated that)
was readnews. rn was the first alternative to gain any popularity. By
the time it came out, there were alterntiave curses-based readers like
notes and vnews. By the time nn came out, there were even X-based news
readers available like xrn and xvnews.


I recall something like pine? (or was that mail, and was there something
pine-related for usenet?)
It may be that the site you were at only offered a few readers.
Probably more correct: I now and then used telnet to connect, so uhm.. no
X. And more importantly, I didn't look further :-)
But
that's a different issue.

All of this is from memory, of course - and may well be wrong.


Those were the days, thanks.

--
John Small Perl scripts: http://johnbokma.com/perl/
Perl programmer available: http://castleamber.com/
Happy Customers: http://castleamber.com/testimonials.html

Aug 27 '05 #129
i left the usenet in the latter half of the '80s. a few weeks
ago i decided i wanted to do a new project with a new language,
and chose python. so i joined this mailing list, which is
gated to the usenet. i am impressed that the s:n has not
gotten significantly worse than when i left, about 0.25, this
message being my contribution to the noise.

the s here is pretty darn good. but the n is pretty silly.

randy

Aug 27 '05 #130
On Fri, 26 Aug 2005 22:33:22 GMT, Rich Teer wrote:
On Fri, 26 Aug 2005, John Bokma wrote:
..My partner can check her email when I had her over the computer.
.... ...The idea of my wife checking her
email while I'm "doing her" over my computer is most amusing! :-)


It does raise the question though. Is it mundane sex,
...or riveting email, that causes this phenomenon? ;-)

[ F'Ups set to c.l.j.p. only ]

--
Andrew Thompson
physci.org 1point1c.org javasaver.com lensescapes.com athompson.info
"You live with apes, man, it's hard to be clean."
Marilyn Manson 'The Beautiful People'
Aug 27 '05 #131
Mike Meyer wrote:
This can be designed much better by using iframes, maybe even Ajax.
Definitely with Ajax. That's one of the things it does really well.


But then you're probably limited to the big 4 of browsers: MSIE,
Mozilla, KHTML/Safari, Opera. Ok, that should cover most desktop users,
but you might run into problems on embedded devices.

I've also noticed that especially web forums and dynamic websites take
up looots of memory on my machine (but then I have loooots).
Why can't we use the Web for what it was meant for: viewing hypertext
pages? Why must we turn it into a wrapper around every application
imaginable?

Because it works?


Because you can - if you know how to use HTML properly - distribute
your application to platforms you've never even heard of - like the
Nokia Communicator.


If the NC has software that can properly interpret all that HTML, CSS,
JavaScript plus image formats, yes. But who guarantees that? I'd
rather develop a native client for the machine that people actually WANT
to use, instead of forcing them to use that little-fiddly web browser on
a teeny tiny display.

And again: connections might be slow, a compact protocol is better than
loading the whole UI every time. And while Ajax might work, despite the
UI being maybe too big for the little browser window, and even if it
works, it's still probably more work than a simple, native UI. First of
all it needs to load all the JS on first load, secondly sometimes for a
flexible UI you'd have to replace huge parts of the page with something
else. Native UIs are more up to the task.
I started writing web apps when I was doing internal tools development
for a software development company that had 90+ different platform
types installed inhouse. It was a *godsend*. By deploying one
If that's 90+ GUI platforms, then I agree. I just wonder who wrote
fully standards-compliant web browsers for those 90 platforms. If you
have one Windows GUI (maybe C#), one Mac GUI (Cocoa), and one Gtk GUI
for X, you're done. A GUI should be the smallest chunk of work on any
given application, so it's not prohibitive to write a couple of them,
IMHO. But then I've only ever used Swing and Cocoa, and the latter *is*
really convenient; might be that the others are a PITA, who knows...
well-written app, I could make everyone happy, without having to do
versions for the Mac, Windows, DOS (this was a while ago), getting it
to compile on umpteen different Unix version, as well as making it
work on proprietary workstation OS's.
Well, stick to POSIX and X APIs and your stuff should run fine on pretty
much all Unices. I never understood those people who write all kinds of
weird ifdefs to run on all Unices. Maybe that was before my time,
during the Unix wars, before POSIX. And if it's not Unix, what's a
prop. workstation OS?
Of course, considering the state of most of the HTML on the web, I
have *no* idea why most of them are doing this.


Yep. Maybe it would be best to reengineer the whole thing as ONE UI
spec+action language, incompatible with the current mess, compact, so it
can be implemented with minimum fuss. And most of all, I wouldn't use a
MARKUP language, as a real application is not text-based (at least not
as characteristic #1).

--
I believe in Karma. That means I can do bad things to people
all day long and I assume they deserve it.
Dogbert
Aug 27 '05 #132
Mike Meyer wrote:
Try turning off JavaScript (I assume you don't because you didn't
complain about it). Most of the sites on the web that use it don't
even use the NOSCRIPT tag to notify you that you have to turn the
things on - much less use it to do something useful.
I had JS off for a long time, but now so many websites expect it, and
even make browsing more convenient, that I grudgingly accepted it. ;)
Sturgeon's law applies to web sites, just like it does to everything
else.


Yep. Filtering is the future in the overloaded world.

--
I believe in Karma. That means I can do bad things to people
all day long and I assume they deserve it.
Dogbert
Aug 27 '05 #133
In comp.lang.perl.misc John Bokma <jo**@castleamber.com> wrote:
Chris Head <ch*******@hotmail.com> wrote:
What advantages would those be (other than access from 'net cafes, but
see below)?

And workplaces. Some people have more than one computer in the house. My
partner can check her email when I had her over the computer. When I
want to check my email when she is using it, I have to change the
session, fire up Thunderbird (which eats away 20M), and change the
session back.


Not a Windows solution, but I find the 'screen' utility invaluable as
I can have my email, news, and an editor open in different screens
and then when I need to move to a different machine, I can simply
detach and reattach screen without disturbing anything that
might be running.

Axel
Aug 27 '05 #134
Ulrich Hobelmann <u.*********@web.de> writes:
Mike Meyer wrote:
This can be designed much better by using iframes, maybe even Ajax.

Definitely with Ajax. That's one of the things it does really well.

But then you're probably limited to the big 4 of browsers: MSIE,
Mozilla, KHTML/Safari, Opera. Ok, that should cover most desktop
users, but you might run into problems on embedded.


True - using Ajax definitely defeats what I consider to be the best
feature of the web.
Why can't we use the Web for what it was meant for: viewing hypertext
pages? Why must we turn it into a wrapper around every application
imaginable?
Because it works?

Because you can - if you know how to use HTML properly - distribute
your application to platforms you've never even heard of - like the
Nokia Communicator.

If the NC has software that can properly interpret all that HTML, CSS,
JavaScript plus image formats, yes. But who guarantees that?


You don't need that guarantee. All you need is a reasonable HTML
renderer. The folks at W3C are smart, and did a good job of designing
the technologies so they degrade gracefully. Anyone with any
competence can design web pages that will both take advantage of
advanced technologies if they are present and still work properly if
they aren't. Yeah, the low-end interface harks back to 3270s, but IBM
had a *great* deal of success with that technology.
I'd rather develop a native client for the machine that people
actually WANT to use, instead of forcing them to use that
little-fiddly web browser on a teeny tiny display.
You missed the point: How are you going to provide native clients for
platforms you've never heard of?
And again: connections might be slow, a compact protocol is better
than loading the whole UI every time. And while Ajax might work,
despite the UI being maybe too big for the little browser window, and
even if it works, it's still probably more work than a simple, native
UI. First of all it needs to load all the JS on first load, secondly
sometimes for a flexible UI you'd have to replace huge parts of the
page with something else. Native UIs are more up to the task.
I'm not arguing that native UI's aren't better. I'm arguing that web
applications provide more portability - which is important for some
applications and some developers.
I started writing web apps when I was doing internal tools development
for a software development company that had 90+ different platform
types installed inhouse. It was a *godsend*. By deploying one

If that's 90+ GUI platforms, then I agree.


Why do you care if they are GUI or not? If you need to provide the
application for them, you need to provide the application for
them. Them not being GUI just means you can't try and use a standard
GUI library. It also means you have to know what you're doing when you
write HTML so that it works properly in a CLUI. But your native app
would have to have a CLUI anyway.
I just wonder who wrote fully standards compliant web browsers for
those 90 platforms.
Nobody. I doubt there's a fully standards compliant web browser
available for *any* platform, much less any non-trivial collection of
them. You write portable web applications to the standards, and design
them to degrade gracefully. Then you go back and work around any new
bugs you've uncovered in the most popular browsers - which
historically are among the *worst* at following standards.
If you have one Windows GUI (maybe C#), one Mac GUI (Cocoa), one Gtk
GUI for X, you're done.
You think you're done. A lot of developers think you can stop with the
first one or two. You're all right for some applications. For others,
you're not. Personally, I like applications that run on all the
platforms I use - and your set doesn't cover all three of those
systems.
well-written app, I could make everyone happy, without having to do
versions for the Mac, Windows, DOS (this was a while ago), getting it
to compile on umpteen different Unix versions, as well as making it
work on proprietary workstation OS's.

Well, stick to POSIX and X APIs and your stuff should run fine on
pretty much all Unices.


You know, the same kind of advice applies to writing portable web
apps. Except when you do it with HTML, "portability" means damn near
any programmable device with a network interface, not some relatively
small fraction of all deployed platforms.
I never understood those people who write all kinds of weird ifdefs
to run on all Unices. Maybe that was before my time, during the
Unix wars, before POSIX.
There were standards before POSIX. They didn't cover everything people
wanted to do, or didn't do them as fast as the OS vendor wanted. So
Unix vendors added their own proprietary extensions, which software
vendors had to use to get the best performance out of their
applications, which they had to do if they wanted people to buy/use
them.

That's still going on - people are adding new functionality that isn't
covered by POSIX to Unix systems all the time, or they are adding
alternatives that are better/faster than the POSIX version, and there
are lots of things that applications want to do that simply aren't
covered by POSIX. And not all implementations are created equal. Some
platforms' mallocs provide - to be polite - less than optimal
performance under conditions real applications encounter, so those
applications conditionally use different malloc implementations. The
same thing applies to threads, except such code typically includes a
third option of not using threads at all. And so on.

And we haven't even started talking about the build process...

Basically, deciding to write to POSIX is a decision to trade away
performance on/to some platforms for portability to more
platforms. It's the same decision as deciding to write a web app,
except the tradeoffs are different. Each of the three solutions has a
different set of costs and benefits, and the correct choice will
depend on your application.
And if it's not Unix, what's a prop. workstation OS?


They've mostly died out since then. At the time, there were things
like Domain and VMS.
Of course, considering the state of most of the HTML on the web, I
have *no* idea why most of them are doing this.

Yep. Maybe it would be best to reengineer the whole thing as ONE UI
spec+action language, incompatible with the current mess, compact, so
it can be implemented with minimum fuss. And most of all, I wouldn't
use a MARKUP language, as a real application is not text-based (at
least not as characteristic #1).


You mean most of the applications I run aren't real applications?
Right now, my desktop has exactly two GUI applications open on it - a
mixer and gkrellm. Everything else is character based. Hell, even my
window manager is character based.

I think you're right - a web standard designed for writing real
applications probably wouldn't start life as a markup for text. The
only thing I can think of that even tries is Flash, but it's
proprietary so I don't know much about it.

Care to tell me how you would design such a format if the goal were to
*not* lose any portability - which means it has to be possible to
design interfaces that work properly on character devices, things like
Palm's three-color greyscale displays, and devices without pointers or
without keyboards, or even in an audio-only environment.

<mike
--
Mike Meyer <mw*@mired.org> http://www.mired.org/home/mwm/
Independent WWW/Perforce/FreeBSD/Unix consultant, email for more information.
Aug 27 '05 #135
On Sat, 27 Aug 2005, Mike Meyer wrote:
I think you're right - a web standard designed for writing real
applications probably wouldn't start life as a markup for text. The
only thing I can think of that even tries is Flash, but it's


What about Java?

--
Rich Teer, SCNA, SCSA, OpenSolaris CAB member

President,
Rite Online Inc.

Voice: +1 (250) 979-1638
URL: http://www.rite-group.com/rich
Aug 27 '05 #136
John Bokma wrote:
Chris Head <ch*******@hotmail.com> wrote:

John Bokma wrote:
Chris Head <ch*******@hotmail.com> wrote:

[HTML]

It can be made much faster. There will always be a delay since
messages have to be downloaded, but with a fast connection and a good
design, the delay will be very very small and the advantages are big.
What advantages would those be (other than access from 'net cafes, but
see below)?

And workplaces. Some people have more than one computer in the house. My
partner can check her email when I hand her over the computer. When I
want to check my email when she is using it, I have to change the
session, fire up Thunderbird (which eats away 20M), and change the
session back.

[ .. ]


Hmm. That would just be a matter of preference. Personally I moved my
Thunderbird profile into a shared directory and pointed everyone at it.
Now only one login session can run Thunderbird at a time, but any login
can see everyone's mailboxes.

Each has its place. A bug in a thick client means each and every one
has to be fixed. With a thin one, just one has to be fixed :-D.
True. However, if people are annoyed by a Thunderbird bug, once it's
fixed, most people will probably go and download the fix (the
Thunderbird developers really only need to fix the bug once too).

Most people who use Thunderbird, yes. Different with OE, I am sure. With
a thin client *everybody*.


True. As a programmer I don't usually think about the people who never
download updates. The way I look at it, if somebody doesn't have the
latest version, they shouldn't be complaining about a bug. I guess thin
clients could be taken to mean you have a very light-weight auto-update
system ;)

Depends on where your mailbox resides. Isn't there something called
MAPI? (I haven't used it myself, but I recall something like that).
IMAP. It stores the messages on the server. Even so, it only has to
transfer the messages, not the bloated UI.

But technically the UI (whether bloated or not) can be cached, and with
Ajax/Frames, etc. there is not really a need to refresh the entire page.
With smarter techniques (like automatically zipping pages), and
techniques like transmitting only deltas (Google experimented with this
some time ago) and better and faster rendering, the UI could be as fast
as a normal UI.
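
Just to illustrate the zipping part, a quick Python experiment (the
page content is made up, and the numbers will differ per page):

    import gzip

    # HTML is full of repeated markup, which compresses extremely well.
    page = ("<tr><td class='msg'>hello</td></tr>\n" * 400).encode("ascii")
    small = gzip.compress(page)
    print(len(page), "bytes raw ->", len(small), "bytes gzipped")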

Isn't the UI in Thunderbird and Firefox created using JavaScript and
XML? Isn't that how future UIs are going to be made?


I believe it is. I'm not sure if it's a good idea, but that's neither
here nor there.

I concede that Webmail
might be just as fast when using a perfectly-designed
Javascript/frames-driven interface. In the real world, Webmail isn't
(unfortunately) that perfect.

Maybe because a lot of users aren't really heavy users. A nice example
(IMO) of a web client that works quite well: webmessenger (
http://webmessenger.msn.com/ ). It has been some time since I used it
the last time, but if I recall correctly I hardly noticed that I was
chatting in a JavaScript pop up window.


Haven't ever needed to use that program.

As I said above regarding 'net cafes:

If the Internet cafe has an e-mail client installed on their
computers, you could use IMAP to access your messages. You'd have to
do a bit more configuration than for Webmail, so it depends on the
user I guess. Personally I doubt my ISP would like me saving a few
hundred megs of e-mail on their server, while Thunderbird is quite
happy to have 1504 messages in my Inbox on my local machine. If I had
to use an Internet cafe, I would rather use IMAP than Webmail.

I'd rather have my email stored locally :-) But several webmail services
offer a form to download email.


I've not seen a service that allows that. Sounds nice.

Ergo,
Thunderbird is faster as soon as the Internet gets congested.

Ah, yeah, wasn't that predicted to happen in like 2001?


Wasn't what predicted to happen? Congestion? It happens even today
(maybe it's the Internet, maybe it's the server, whatever...). Hotmail
is often pretty slow.

I read some time ago that about 1/3 of traffic consists of BitTorrent
traffic... If the Internet gets congested, new techniques are needed,
like mod_gzip on every server, a way to transfer only deltas of webpages
if an update occurred (like Google did some time ago), and better
handling of RSS (I have the impression that there is no "page has not
been modified" thing like with HTML, or at least I see quite a few
clients fetch my feed every hour, again and again).


Eventually you reach the point where it's not bandwidth any more, it's
server load. All these things like mod_gzip, deltas, and so on add
server load.

As to the point about "page not modified", it's not in the HTML spec,
it's in the HTTP spec. RFC2616 (HTTP1.1) defines an "If-Modified-Since"
header a client may send to the server indicating that it has a cached
copy of the page at that date. If the page has not changed, the server
should send HTTP 304 (not modified) with no content. For best results
(due to clock mismatches etc), the client should set the
If-Modified-Since header to the value of the Last-Modified header sent
by the server when the page was first requested and cached.
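
In Python, the whole exchange looks roughly like this (a sketch only;
www.example.org stands in for any server that sends a Last-Modified
header):

    import http.client

    conn = http.client.HTTPConnection("www.example.org")

    # First fetch: keep the body and remember Last-Modified.
    conn.request("GET", "/")
    resp = conn.getresponse()
    cached_body = resp.read()
    last_modified = resp.getheader("Last-Modified")

    # Later revalidation: ask only for changes since that date.
    conn.request("GET", "/", headers={"If-Modified-Since": last_modified})
    resp = conn.getresponse()
    if resp.status == 304:
        resp.read()                # 304 carries no body; cached copy is fresh
    else:
        cached_body = resp.read()  # page changed; replace the cache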

I think we can agree that in some cases, Webmail is better, and in
others, clients are better. Much of this will be personal preference,
and I would like to see ISPs offering both methods of accessing e-mail
(as mine in fact does - POP3 and Webmail).

Chris
Aug 27 '05 #137
Ulrich Hobelmann <u.*********@web.de> writes:
Mike Meyer wrote:
Try turning off JavaScript (I assume you don't because you didn't
complain about it). Most of the sites on the web that use it don't
even use the NOSCRIPT tag to notify you that you have to turn the
things on - much less use it to do something useful.

I had JS off for a long time, but now so many websites expect it, and
even make browsing more convenient, that I grudgingly accepted it. ;)


I've turned it on because I'm using an ISP that requires me to log in
via a javascript only web page. They have a link that claims to let
non-JS browsers log in, but it doesn't work. My primary browser
doesn't support JavaScript or CSS, and is configured with images
turned off, mostly because I want the web to be fast. What it does
have is the ability to launch one of three different external browsers
on either the current page or any link on the page, so those facilities
are a few keystrokes away. The default browser on my Mac has all that
crap turned on, but I don't have anything I consider either important or
sensitive on it, and the only client who has data on it considers such
things an acceptable risk.
Sturgeon's law applies to web sites, just like it does to everything
else.

Yep. Filtering is the future in the overloaded world.


And to the best of my knowledge, Google filters out content that my
primary desktop browser can't display.

<mike
--
Mike Meyer <mw*@mired.org> http://www.mired.org/home/mwm/
Independent WWW/Perforce/FreeBSD/Unix consultant, email for more information.
Aug 27 '05 #138
ax**@white-eagle.invalid.uk writes:
In comp.lang.perl.misc John Bokma <jo**@castleamber.com> wrote:
Chris Head <ch*******@hotmail.com> wrote:
What advantages would those be (other than access from 'net cafes, but
see below)?

And workplaces. Some people have more than one computer in the house. My
partner can check her email when I hand her over the computer. When I
want to check my email when she is using it, I have to change the
session, fire up Thunderbird (which eats away 20M), and change the
session back.

Not a Windows solution, but I find the 'screen' utility invaluable as
I can have my email, news, and an editor open in different screens
and then when I need to move to a different machine, I can simply
detach and reattach screen without disturbing anything that
might be running.


For a more portable solution, check out VNC.

<mike
--
Mike Meyer <mw*@mired.org> http://www.mired.org/home/mwm/
Independent WWW/Perforce/FreeBSD/Unix consultant, email for more information.
Aug 27 '05 #139
Previously i've made serious criticisms of Python's documentation
problems.
(see http://xahlee.org/perl-python/re-write_notes.html )

I have indicated that an exemplary documentation is that of Wolfram
Research Incorporated's Mathematica language. (available online at
http://documents.wolfram.com/mathematica/ )

Since Mathematica is a proprietary language costing over a thousand
dollars and most people in the IT industry are not familiar with it,
i'd like to announce a new discovery:

This week i happened to read the documentation of Microsoft's
JavaScript. See
http://msdn.microsoft.com/library/en...ndamentals.asp

This entire document is a paragon of technical writing. It has
clarity, conciseness, and precision. It does not abuse jargons, it
doesn't ramble, it doesn't exhibit author masturbation, and it covers
its area extremely well and completely. The documentation set is very
well organized into 3 sections: Fundamentals, Advanced, Reference. The
tutorial section “fundamentals” is extremely simple and to the
point. The “advanced” section gives a very concise yet easy-to-read
treatment of some fine details of the language. And its language
reference section is complete and exact.

I would like the IT industry programers and the OpenSource fuckheads to
take note of this documentation so that you can learn.

Also, this is not the only good documentation in the industry. As i
have indicated, Mathematica documentation is equally excellent. In
fact, the official Java documentation (so-called Java API by Sun
Microsystems) is also extremely well-written, even though Java the
language is unnecessarily complex and involves far more technical
concepts that necessitate the use of proper jargons, as can be seen in
their doc.

An additional note i'd like to give the OpenSource coding morons in the
industry, is that in general the fundamental reason that Perl, Python,
Unix, Apache etc documentations are extremely bad in multiple aspects
is OpenSource fanaticism. The fanaticism has made it so that
OpenSource people simply became UNABLE to discern quality. This
situation can be seen in the responses to criticisms of OpenSource
docs. What made the situation worse is the OpenSource mantra of
“contribution” — treating as hostile any negative criticism unless
the critic “contributed” without charge.

Another important point i should point out is that the OpenSource
morons tend to offer “lack of resources” as an excuse for their
lack of quality (when they are kicked hard enough to finally admit that
they lack quality in the first place). No, it is not lack of resources
that made the OpenSource doc criminally incompetent. OpenSource has
created tools that take far more energy and time than writing manuals.
Lack of resources of course CAN be a contributing reason, along with
OpenSource coders' general lack of ability to write well, among other
reasons, but the main cause, as i have stated above, is OpenSource
fanaticism. It is that which has made them blind.

PS just to note, that my use of OpenSource here does not include Free
Software Foundation's Gnu's Not Unix project. The GNU project in general
has very excellent documentation. GNU docs are geeky in comparison to
the commercial entities' docs, but do not exhibit jargon abuse,
rambling, author masturbation, or hodgepodge as do the OpenSource ones
mentioned above.

Xah
xa*@xahlee.org
http://xahlee.org/

Aug 27 '05 #140
On Sat, 27 Aug 2005, Xah Lee wrote:

His usual crap.

[ASCII art: a "Please do NOT feed the trolls" sign]

--
Rich Teer, SCNA, SCSA, OpenSolaris CAB member

President,
Rite Online Inc.

Voice: +1 (250) 979-1638
URL: http://www.rite-group.com/rich
Aug 27 '05 #141
On 27 Aug 2005 14:59:34 -0700, "Xah Lee" <xa*@xahlee.org> declaimed the
following in comp.lang.python:

<tsk> Opens with a tirade about Python documentation yet cross-posts
to FIVE groups.
I would like the IT industry programers and the OpenSource fuckheads to
take note of this documentation so that you can learn.
And a statement like this is supposed to encourage such folks to
listen to you?

That's like a politician calling the people with the money "ignorant
peons" as he asks them to donate to his campaign.

--
wl*****@ix.netcom.com | Wulfraed Dennis Lee Bieber KD6MOG
wu******@dm.net | Bestiaria Support Staff
Home Page: <http://www.dm.net/~wulfraed/>
Overflow Page: <http://wlfraed.home.netcom.com/>

Aug 28 '05 #142
Rich Teer <ri*******@rite-group.com> writes:
On Sat, 27 Aug 2005, Mike Meyer wrote:
I think you're right - a web standard designed for writing real
applications probably wouldn't start life as a markup for text. The
only thing I can think of that even tries is Flash, but it's

What about Java?


Using HTML, I can build applications that work properly on anything
from monochrome terminals to the latest desktop box. Is there a
UI toolkit for Java that's that flexible?

<mike
--
Mike Meyer <mw*@mired.org> http://www.mired.org/home/mwm/
Independent WWW/Perforce/FreeBSD/Unix consultant, email for more information.

Aug 28 '05 #143
Mike Meyer wrote:
I'd rather develop a native client for the machine that people
actually WANT to use, instead of forcing them to use that
little-fiddly web browser on a teeny tiny display.
You missed the point: How are you going to provide native clients for
platforms you've never heard of?


Who says I have to? With open protocols, everybody can. I know many
platforms that STILL don't have a browser that would work with most
websites out there. They all have NNTP, SMTP and POP clients.
Text-mode, GUI-mode, your choice.
And again: connections might be slow, a compact protocol is better
than loading the whole UI every time. And while Ajax might work,
despite the UI being maybe too big for the little browser window, and
even if it works, it's still probably more work than a simple, native
UI. First of all it needs to load all the JS on first load, secondly
sometimes for a flexible UI you'd have to replace huge parts of the
page with something else. Native UIs are more up to the task.


I'm not arguing that native UI's aren't better. I'm arguing that web
applications provide more portability - which is important for some
applications and some developers.


Like Java provides more portability. Unless you ran NetBSD in 2003
(there was no Java back then that worked for me), hm, IRIX? Plan9? BeOS?
The list goes on... LOTS of platforms don't have the manpower to
develop a client that renders all of the huge bloated wagonload of W3C
tech that was only designed for *markup* from the beginning.
I started writing web apps when I was doing internal tools development
for a software development company that had 90+ different platform
types installed inhouse. It was a *godsend*. By deploying one

If that's 90+ GUI platforms, then I agree.


Why do you care if they are GUI or not? If you need to provide the
application for them, you need to provide the application for
them. Them not being GUI just means you can't try and use a standard
GUI library. It also means you have to know what you're doing when you
write HTML so that it works properly in a CLUI. But your native app
would have to have a CLUI anyway.


Ok, UI then ;)
I don't care what UIs people like and use.
I just wonder who wrote fully standards compliant web browsers for
those 90 platforms.


Nobody. I doubt there's a fully standards compliant web browser


Nobody, huh? Then how could you run just ANY web application on those
platforms?
available for *any* platform, much less any non-trivial collection of
them. You write portable web applications to the standards, and design
them to degrade gracefully. Then you go back and work around any new
Oh right, they degrade gracefully. So without Javascript or cookies
(the former is often not implemented) you get an HTML page with an error
notice -- if you're lucky.

A server AND client for a simple protocol designed for its task (i.e.
not FTP for instance) can be implemented in much less work than
designing even part of a web application backend that does that kind of
stuff. Plus you're not bound by HTTP request structure, you can use
publish/subscribe or whatever communication style you want for efficiency.
bugs you've uncovered in the most popular browsers - which
historically are among the *worst* at following standards.
If you have one Windows GUI (maybe C#), one Mac GUI (Cocoa), one Gtk
GUI for X, you're done.
You think you're done. A lot of developers think you can stop with the
first one or two. You're all right for some applications. For others,
you're not. Personally, I like applications that run on all the
platforms I use - and your set doesn't cover all three of those
systems.


Ok, I'd be interested to hear what those are. VMS, RiscOS, Mac OS 9...?
well-written app, I could make everyone happy, without having to do
versions for the Mac, Windows, DOS (this was a while ago), getting it
to compile on umpteen different Unix versions, as well as making it
work on proprietary workstation OS's.

Well, stick to POSIX and X APIs and your stuff should run fine on
pretty much all Unices.


You know, the same kind of advice applies to writing portable web
apps. Except when you do it with HTML, "portability" means damn near
any programmable device with a network interface, not some relatively
small fraction of all deployed platforms.


Only that even years ago lots of small platforms would run X, but
even today MANY platforms don't run a browser with XHTML/HTML4+JS+CSS
(well, okay, the CSS isn't that important).
I never understood those people who write all kinds of weird ifdefs
to run on all Unices. Maybe that was before my time, during the
Unix wars, before POSIX.


There were standards before POSIX. They didn't cover everything people
wanted to do, or didn't do them as fast as the OS vendor wanted. So
Unix vendors added their own proprietary extensions, which software
vendors had to use to get the best performance out of their
applications, which they had to do if they wanted people to buy/use
them.


Performance? Hm, like epoll/kqueue vs select? Can't think of examples
here, but I pretty much only know BSD/POSIX.
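
(For what it's worth, Python's select module makes the pattern visible
-- a tiny sketch of probing for the better readiness API and falling
back to plain select:)

    import select

    if hasattr(select, "epoll"):      # Linux
        make_poller = select.epoll
    elif hasattr(select, "kqueue"):   # the BSDs, OS X
        make_poller = select.kqueue
    else:                             # portable but O(n) fallback
        make_poller = None

    print("using", make_poller.__name__ if make_poller else "select.select")
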
That's still going on - people are adding new functionality that isn't
covered by POSIX to Unix systems all the time, or they are adding
alternatives that are better/faster than the POSIX version, and there
are lots of things that applications want to do that simply aren't
covered by POSIX. And not all implementations are created equal. Some
platforms' mallocs provide - to be polite - less than optimal
performance under conditions real applications encounter, so those
applications conditionally use different malloc implementations. The
Well, OF COURSE malloc is ONE general purpose function that HAS to carry
some overhead. I routinely use my own frontend(s) for it, to cluster
allocations locally (for caching and alloc performance). A matter of
100 LOC, usually. No problem at all.

If a system's scheduler, or select implementation sucks, though, I'd
complain to the vendor or simply abandon the platform for another.
Competition is good :)
same thing applies to threads, except such code typically includes a
third option of not using threads at all. And so on.
Well, whoever doesn't do threads after several years of POSIX IMHO can't
be taken seriously. Ok, the BSDs didn't until recently, but those are
volunteer projects.
And we haven't even started talking about the build process...
If the libraries are installed, just build and link it (if you use
standard C, POSIX + 3rd party libs that do the same). If not, then
tough luck -- it couldn't even run in theory then.
Basically, deciding to write to POSIX is a decision to trade away
performance on/to some platforms for portability to more
platforms. It's the same decision as deciding to write a web app,
except the tradeoffs are different. Each of the three solutions has a
different set of costs and benefits, and the correct choice will
depend on your application.
I'd like to hear about those performance problems... If someone can't
make the standard calls efficient, they should leave the business and
give their customers Linux or BSD.
And if it's not Unix, what's a prop. workstation OS?


They've mostly died out since then. At the time, there were things
like Domain and VMS.


Never heard of Domain, but VMS is called NT/2000/XP/2003/Vista now (with
some enhancements and a new GUI). ;)
Of course, considering the state of most of the HTML on the web, I
have *no* idea why most of them are doing this.

Yep. Maybe it would be best to reengineer the whole thing as ONE UI
spec+action language, incompatible with the current mess, compact, so
it can be implemented with minimum fuss. And most of all, I wouldn't
use a MARKUP language, as a real application is not text-based (at
least not as characteristic #1).


You mean most of the applications I run aren't real applications?
Right now, my desktop has exactly two GUI applications open on it - a
mixer and gkrellm. Everything else is character based. Hell, even my
window manager is character based.


I meant not using text elements. Of course it includes text, in your
case predominantly. But even most curses clients have other elements
sometimes, like links. A standard spec language could cater easily for
text clients, but a text language like HTML has a harder time catering
for good GUI clients. Most apps I use have buttons and menus that I
wouldn't want to express with markup (and web pages that try to do that
almost invariably suck).
I think you're right - a web standard designed for writing real
applications probably wouldn't start life as a markup for text. The
only thing I can think of that even tries is Flash, but it's
proprietary so I don't know much about it.
Java has been mentioned in the other response, but there's also all
other kinds of application frameworks. Only XUL is markup based, with
the effect that there's almost no text at all between the markup tags I
guess ;)
Care to tell me how you would design such a format if the goal were to
*not* lose any portability - which means it has to be possible to
design interfaces that work properly on character devices, things like
Palm's three-color greyscale displays, and devices without pointers or
without keyboards, or even in an audio-only environment.


Colors can be sampled down. Even the new Enlightenment libs do that
(they say). For mapping a GUI client to a text client, ok, tough. Face
it, lots of things just can't be expressed in pure text. Images, PDF
viewing, video, simulation with graphical representations...

Pointers could be added to any kind of machine, and even without it, you
could give it a gameboy-style controller for cursor movement (i.e. arrow
keys).

I'm just not talking about a language for audio- and text-mode clients ;)

--
My ideal for the future is to develop a filesystem remote interface (a
la Plan 9) and then have it implemented across the Internet as the
standard rather than HTML. That would be ultimate cool.
Ken Thompson
Aug 28 '05 #144
Ulrich Hobelmann <u.*********@web.de> writes:
Mike Meyer wrote:
I'd rather develop a native client for the machine that people
actually WANT to use, instead of forcing them to use that
little-fiddly web browser on a teeny tiny display.
You missed the point: How are you going to provide native clients for
platforms you've never heard of?

Who says I have to? With open protocols, everybody can. I know many
platforms that STILL don't have a browser that would work with most
websites out there. They all have NNTP, SMTP and POP
clients. Text-mode, GUI-mode, your choice.


The people who are distributing applications via the web. You want to
convince them to quit using web technologies, you have to provide
something that can do the job that they do.
And again: connections might be slow, a compact protocol is better
than loading the whole UI every time. And while Ajax might work,
despite the UI being maybe too big for the little browser window, and
even if it works, it's still probably more work than a simple, native
UI. First of all it needs to load all the JS on first load, secondly
sometimes for a flexible UI you'd have to replace huge parts of the
page with something else. Native UIs are more up to the task.

I'm not arguing that native UI's aren't better. I'm arguing that web
applications provide more portability - which is important for some
applications and some developers.

Like Java provides more portability. Unless you ran NetBSD in 2003
(there was no Java back then that worked for me), hm, IRIX?, Plan9,
BeOS the list goes on... LOTS of platforms don't have the manpower to
develop a client that renders all of the huge bloated wagonload of W3C
tech that was only designed for *markup* from the beginning.


I'm still waiting for an answer to that one - where's the Java toolkit
that handles full-featured GUIs as well as character cell
interfaces. Without that, you aren't doing the job that the web
technologies do.
I just wonder who wrote fully standards compliant web browsers for
those 90 platforms.

Nobody. I doubt there's a fully standards compliant web browser

Nobody, huh? Then how could you run just ANY web application on those
platforms?


The same way you write POSIX applications in the face of buggy
implementations - by working around the bugs in the working part of
the implementation, and using conditional code where that makes a
serious difference.
available for *any* platform, much less any non-trivial collection of
them. You write portable web applications to the standards, and design
them to degrade gracefully. Then you go back and work around any new

Oh right, they degrade gracefully. So without Javascript or cookies
(the former is often not implemented) you get an HTML page with an
error notice -- if you're lucky.


You left off the important part of what I had to say - that the
application be written by a moderately competent web author.
A server AND client for a simple protocol designed for its task
(i.e. not FTP for instance) can be implemented in much less work than
designing even part of a web application backend that does that
kind of stuff.
Well, if it's that easy (and web applications are dead simple), it
should be done fairly frequently. Care to provide an example?
You think you're done. A lot of developers think you can stop with
the
first one or two. You're all right for some applications. For others,
you're not. Personally, I like applications that run on all the
platforms I use - and your set doesn't cover all three of those
systems.

Ok, I'd be interested to hear what those are. VMS, RiscOS, Mac OS 9...?


FreeBSD, OS X and a Palm Vx.
If a system's scheduler, or select implementation sucks, though, I'd
complain to the vendor or simply abandon the platform for
another. Competition is good :)
Complaining to the vendor doesn't always get the bug fixed. And
refusing to support a platform isn't always an option. Sometimes, you
have to bite the bullet and work around the bug on that platform.
same thing applies to threads, except such code typically includes a
third option of not using threads at all. And so on.

Well, who doesn't do threads after several years of POSIX IMHO can't
be taken seriously. Ok, the BSDs didn't until recently, but those are
volunteer projects.


Not all platforms are POSIX. If you're ok limiting your application to
a small subset of the total number of platforms available, then
there's no advantage to using web technologies. Some of us aren't
satisfied with that, though.
And we haven't even started talking about the build process...

If the libraries are installed, just build and link it (if you use
standard C, POSIX + 3rd party libs that do the same). If not, then
tough luck -- it couldn't even run in theory then.


You have to have the right build tool installed. Since you use BSD,
you've surely run into typing "make" only to have it blow up because
it expects gmake.
Of course, considering the state of most of the HTML on the web, I
have *no* idea why most of them are doing this.
Yep. Maybe it would be best to reengineer the whole thing as ONE UI
spec+action language, incompatible with the current mess, compact, so
it can be implemented with minimum fuss. And most of all, I wouldn't
use a MARKUP language, as a real application is not text-based (at
least not as characteristic #1).

You mean most of the applications I run aren't real applications?
Right now, my desktop has exactly two GUI applications open on it - a
mixer and gkrellm. Everything else is character based. Hell, even my
window manager is character based.

I meant not using text elements. Of course it includes text, in your
case predominantly. But even most curses clients have other elements
sometimes, like links. A standard spec language could cater easily
for text clients, but a text language like HTML has a harder time
catering for good GUI clients. Most apps I use have buttons and menus
that I wouldn't want to express with markup (and web pages that try to
do that almost invariably suck).


Well, marking up text is a pretty poor way to describe a UI - but
anything that is going to replace web technologies has to have a
media-independent way to describe the UI. One of the things that made
the web take off early was that anyone with a text editor could create
web pages. I think that's an important property to keep - you want the
tools that people use to create applications be as portable/flexible
as the applications. Since most GUI's are written in some programming
language or another, and most programming languages are still flat
text, a GUI description as flat text exists for most GUIs, so this
requirement isn't a handicap.
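
As a toy sketch of what such a flat-text, media-independent description
might look like (everything below is invented for the example, not a
real format):

    # The same spec could be handed to a GUI renderer or a curses one.
    ui_spec = {
        "title": "Compose mail",
        "fields": [
            {"kind": "line", "name": "to", "label": "To"},
            {"kind": "multiline", "name": "body", "label": "Message"},
        ],
        "actions": ["Send", "Cancel"],
    }

    def render_text(spec):
        """Degrade the spec to a character-cell rendering."""
        lines = [spec["title"], "=" * len(spec["title"])]
        for field in spec["fields"]:
            lines.append("%s: ____________" % field["label"])
        lines.append("  ".join("[%s]" % a for a in spec["actions"]))
        return "\n".join(lines)

    print(render_text(ui_spec))
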
I think you're right - a web standard designed for writing real
applications probably wouldn't start life as a markup for text. The
only thing I can think of that even tries is Flash, but it's
proprietary so I don't know much about it.

Java has been mentioned in the other response, but there's also all
other kinds of application frameworks. Only XUL is markup based, with
the effect that there's almost no text at all between the markup tags
I guess ;)


You don't have to guess - finding examples of XUL isn't hard at all. I
think XML gets used in a lot of places where it isn't appropriate. One
of the few places where it is appropriate is where you want a file
format that lots of independent implementations are going to be
reading. This could well be one of those times.
Care to tell me how you would design such a format if the goal were to
*not* lose any portability - which means it has to be possible to
design interfaces that work properly on character devices, things like
Palm's three-color greyscale displays, and devices without pointers or
without keyboards, or even in an audio-only environment.

Colors can be sampled down. Even the new Enlightenment libs do that
(they say). For mapping a GUI client to a text client, ok, tough.
Face it, lots of things just can't be expressed in pure text. Images,
PDF viewing, video, simulation with graphical representations...


Applications aren't one of those things. Even applications that work
with those things don't need GUI interfaces.
Pointers could be added to any kind of machine, and even without it,
you could give it a gameboy-style controller for cursor movement
(i.e. arrow keys).
Yeah, if you're willing to tell your potential users "Go out and buy
more hardware". If you're Microsoft, you probably do that with the
addendum "from us". Not being Microsoft or a control freak, I want
applications that work with whatever the users already have.
I'm just not talking about a language for audio- and text-mode clients ;)


Then you're not talking about replacing HTML et al.

<mike
--
Mike Meyer <mw*@mired.org> http://www.mired.org/home/mwm/
Independent WWW/Perforce/FreeBSD/Unix consultant, email for more information.
Aug 28 '05 #145
Mike Meyer wrote:
I'm still waiting for an answer to that one - where's the Java toolkit
that handles full-featured GUIs as well as character cell
interfaces. Without that, you aren't doing the job that the web
technologies do.
Where is the text-mode browser that would even run part of the web apps
I use, like home-banking, all web forums, server configuration
interfaces, etc.? I think we should leave both these questions open.
(In fact, as to UIs using Java, blech! Haven't seen a really good one...)
I just wonder who wrote fully standards compliant web browsers for
those 90 platforms.
Nobody. I doubt there's a fully standards compliant web browser

Nobody, huh? Then how could you run just ANY web application on those
platforms?


The same way you write POSIX applications in the face of buggy
implementations - by working around the bugs in the working part of
the implementation, and using conditional code where that makes a
serious difference.


But as soon as some user of platform 54 tries your website, she'll
encounter some weird behavior without even knowing why. And maybe so
will you, especially if you don't have that platform there for testing.
I don't understand how this web thing changes anything... With POSIX
at least you have a real bug-report for the guy responsible for it. If
a platform keeps being buggy, with no fixes coming, screw them. Every
user will see that sooner or later, and these platforms die. Even
Windows is quite stable/reliable after 10+ years of NT!
available for *any* platform, much less any non-trivial collection of
them. You write portable web applications to the standards, and design
them to degrade gracefully. Then you go back and work around any new

Oh right, they degrade gracefully. So without Javascript or cookies
(the former is often not implemented) you get a HTML page with an
error notice -- if you're lucky.


You left off the important part of what I had to say - that the
application be written by a moderately competent web author.


But if you can cater for all kinds of sub-platforms, then why not just
provide a CLI as well as those GUI interfaces, when we're duplicating
work to begin with? ;)

If it doesn't run without JS, then you lock out 90% of all alive
platforms (and maybe 1% of all alive users :D) anyway.
A server AND client for a simple protocol designed for its task
(i.e. not FTP for instance) can be implemented in much less work than
designing even part of a web application backend that does that
kind of stuff.


Well, if it's that easy (and web applications are dead simple), it
should be done fairly frequently. Care to provide an example?


We have all the web standards, with various extensions over the years.
Some FTP clients don't even crash if they see that some server doesn't
yet support the extension from RFC XY1234$!@. Then there's tons of
inter-application traffic in XML already, growing fast. Then there are
s-expressions (Lisp XML if you want). Then probably thousands of ad-hoc
line-based text protocols, but I don't know how well they can be
extended. There's CORBA. Most web standards are simple, at least if
you subtract the weird stuff (and IMHO there should be new
versions of everything with the crap removed). XML is somewhat simple,
just hook libxml.
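
(E.g. in Python, with ElementTree, consuming such XML traffic is a few
lines -- the message itself is made up for the example:)

    import xml.etree.ElementTree as ET

    doc = '<order id="42"><item sku="X1" qty="3"/><item sku="Y2" qty="1"/></order>'
    root = ET.fromstring(doc)
    for item in root.findall("item"):
        print(item.get("sku"), item.get("qty"))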

There's NNTP. There's RSS. There's Atom. The latter two emerged quite
painlessly, even though you could maybe use some website for what they
provide. But this way you have lots of clients for lots of platforms
already.
You think you're done. A lot of developers think you can stop with
the
first one or two. You're all right for some applications. For others,
you're not. Personally, I like applications that run on all the
platforms I use - and your set doesn't cover all three of those
systems.

Ok, I'd be interested to hear what those are. VMS, RiscOS, Mac OS 9...?


FreeBSD, OS X and a Palm Vx.


Didn't I say, a GUI for the Mac, for X11, and Windows? That only leaves
out the Palm. I heard they aren't too hard to program for, either. But
I haven't heard of a really decent browser for pre-OS5 PalmOS (not sure
about OS5).
If a system's scheduler, or select implementation sucks, though, I'd
complain to the vendor or simply abandon the platform for
another. Competition is good :)


Complaining to the vendor doesn't always get the bug fixed. And
refusing to support a platform isn't always an option. Sometimes, you
have to bite the bullet and work around the bug on that platform.


Sure, but you can tell your customers that unfortunately their system
vendor refuses to fix a bug and ask THEM to ask that vendor. Boy, will
they consider another platform in the future, where bugs do get fixed ;)
same thing applies to threads, except such code typically includes a
third option of not using threads at all. And so on.

Well, who doesn't do threads after several years of POSIX IMHO can't
be taken seriously. Ok, the BSDs didn't until recently, but those are
volunteer projects.


Not all platforms are POSIX. If you're ok limiting your application to
a small subset of the total number of platforms available, then
there's no advantage to using web technologies. Some of us aren't
satisfied with that, though.


Sure. You have to look where your users are. Chances are that with
obscure systems they can't use most web-apps either.
And we haven't even started talking about the build process...

If the libraries are installed, just build and link it (if you use
standard C, POSIX + 3rd party libs that do the same). If not, then
tough luck -- it couldn't even run in theory then.


You have to have the right build tool installed. Since you use BSD,
you've surely run into typing "make" only to have it blow up because
it expects gmake.


With 3rd party-stuff, yes. The little that I've written so far compiled
immediately (with pmake and gmake), except for C99 (the FreeBSD I tried
had gcc 2.95 installed). But now I write all C as pre-99 anyway; it
looks cleaner, IMHO.
Well, marking up text is a pretty poor way to describe a UI - but
anything that is going to replace web technologies has to have a
media-independent way to describe the UI. One of the things that made
the web take off early was that anyone with a text editor could create
web pages. I think that's an important property to keep - you want the
tools that people use to create applications be as portable/flexible
as the applications. Since most GUI's are written in some programming
language or another, and most programming languages are still flat
text, a GUI description as flat text exists for most GUIs, so this
requirement isn't a handicap.
That's true, though I think the future of development lies in overcoming
that program-code-as-text thing (NOT visual programming, just
tool-based, structured). Smalltalk did it decades ago.
You don't have to guess - finding examples of XUL isn't hard at all. I
think XML gets used in a lot of places where it isn't appropriate. One
of the few places where it is appropriate is where you want a file
format that lots of independent implementations are going to be
reading. This could well be one of those times.


Maybe, but for applications that aren't predominantly concerned about
text, I'd really rather use a structured data type (like s-expressions),
not text markup like XML. For hypertext, XHTML is fine, though, if a
bit verbose.

[follow-up set to comp.unix.programmer]

(I just noticed I replaced my sig with something web-related yesterday.
This is pure coincidence :D)

--
My ideal for the future is to develop a filesystem remote interface
(a la Plan 9) and then have it implemented across the Internet as
the standard rather than HTML. That would be ultimate cool.
Ken Thompson
Aug 29 '05 #146
In comp.lang.perl.misc Mike Meyer <mw*@mired.org> wrote:
ax**@white-eagle.invalid.uk writes:
In comp.lang.perl.misc John Bokma <jo**@castleamber.com> wrote:
Chris Head <ch*******@hotmail.com> wrote:
What advantages would those be (other than access from 'net cafes, but
see below)?
And workplaces. Some people have more than one computer in the house. My
partner can check her email when I hand her over the computer. When I
want to check my email when she is using it, I have to change the
session, fire up Thunderbird (which eats away 20M), and change the
session back.
Not a Windows solution, but I find the 'screen' utility invaluable as
I can have my email, news, and an editor open in different screens
and then when I need to move to a different machine, I can simply
detach and reattach screen without disturbing anything that
might be running.

For a more portable solution, check out VNC.


I know... but it is a bugger to set up and I believe it is no longer
freeware (if it ever was), and it does not have the stark simplicity
which screen has... I only need to have a compiled version of screen
on the machine on which I do most of my work and be able to ssh/telnet
to that machine without involving any additional software installations
on other machines.

Axel
Aug 29 '05 #147
On Sat, 27 Aug 2005 14:35:05 -0400, Mike Meyer <mw*@mired.org> wrote:
Ulrich Hobelmann <u.*********@web.de> writes:
Mike Meyer


I wonder could you guys stop cross-posting this stuff to
comp.lang.perl.misc? The person who started this thread - a
well-known troll - saw fit to post it there, and now all your posts
are going there too.
--

Henry Law <>< Manchester, England
Aug 29 '05 #148

John Bokma wrote:
"T Beck" <Tr********@Infineon.com> wrote:
[snip] alongside of it. The internet is a free-flowing evolving place... to
try to protect one little segment like usenet from ever evolving is
just ensuring its slow death, IMHO.


And if so, who cares? As long as people hang out on Usenet it will stay.
Does Usenet need all those extra gimmicks? To me, it would be nice if a
small set would be available. But need? No.

The death of Usenet has been predicted for ages. And I see only more and
more groups, and maybe more and more people on it.

As long as people who have something sensible to say keep using it, it
will stay.

I suppose I was (as many people on the internet have a bad habit of
doing) being more caustic than was strictly necessary. I don't really
foresee the death of usenet anytime soon, I just don't think the idea of
it evolving is necessarily bad. I don't really have a lot of vested
interest one way or the other, to be honest, and I'm perfectly happy
with the way it is.

I just think it's a naive view to presume it never will change, because
change is what the internet as a whole was built on.

I think I'll calmly butt out now ^_^

-- T Beck

Aug 29 '05 #149
Why on earth was this cross-posted to comp.lang.c? Followups set.

On Mon, 29 Aug 2005 12:26:11 GMT, ax**@white-eagle.invalid.uk wrote:
In comp.lang.perl.misc Mike Meyer <mw*@mired.org> wrote:
ax**@white-eagle.invalid.uk writes:
In comp.lang.perl.misc John Bokma <jo**@castleamber.com> wrote:
Chris Head <ch*******@hotmail.com> wrote:
> What advantages would those be (other than access from 'net cafes, but
> see below)?
And workplaces. Some people have more than one computer in the house. My
partner can check her email when I hand her over the computer. When I
want to check my email when she is using it, I have to change the
session, fire up Thunderbird (which eats away 20M), and change the
session back.
Not a Windows solution, but I find the 'screen' utility invaluable as
I can have my email, news, and an editor open in different screens
and then when I need to move to a different machine, I can simply
detach and reattach screen without disturbing anything that
might be running.

For a more portable solution, check out VNC.


I know... but it is a bugger to set up and I believe it is no longer
freeware (if it ever was), and it does not have the stark simplicity
which screen has... I only need to have a compiled version of screen
on the machine on which I do most of my work and be able to ssh/telnet
to that machine without involving any additional software installations
on other machines.

Axel

--
Al Balmer
Balmer Consulting
re************************@att.net
Aug 29 '05 #150
