Bytes IT Community

how to make a link to an old page go to a new page without displaying anything

ted
I have an old link that was widely distributed. I would now like to
put a link on that old page that will go to a new page without
displaying anything.

Aug 20 '07 #1
38 Replies


On Mon, 20 Aug 2007 09:08:51 +0200, <te*@shapin.org> wrote:
I have an old link that was widely distributed. I would now like to
put a link on that old
page that will go to a new page without displaying anything.
Assuming your "old" link is deprecated and shouldn't be used anymore (i.e.
people ought to update their bookmarks and their links to the new target),
you should use a 301 Moved Permanently HTTP redirect.

See:
http://www.ietf.org/rfc/rfc2616.txt
http://www.w3.org/DesignIssues/UserAgent.html

Please, do not use the http-equiv=refresh meta, it "breaks the back
button".
http://www.w3.org/QA/Tips/reback

If your content management system is good, you should be able to set up
a 301 redirect.
http://www.w3.org/TR/chips/#gl2

If you use an Apache server:
http://www.mcanerin.com/EN/articles/...ect-apache.asp

Simply add a "Redirect 301 /old/old.html http://domain/new/new.html" line
in a .htaccess file.
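As a sketch (the paths and domain here are hypothetical placeholders, not anything from this thread), such a .htaccess might look like:

```apache
# Minimal .htaccess sketch, assuming Apache with mod_alias enabled.
# 301 = Moved Permanently: browsers and search engines should update
# to the new URL.
Redirect 301 /old/old.html http://domain/new/new.html

# Several moved pages under one directory can be handled with a pattern:
RedirectMatch 301 ^/old/(.*)$ http://domain/new/$1
```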

If you use an ASP server, PHP, a CGI script or ColdFusion, it should be
easy too:
http://www.somacon.com/p145.php
A few others at:
http://www.webconfs.com/how-to-redirect-a-webpage.php
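Whatever the language, the script-based approaches linked above boil down to the same two steps: emit a 301 status and a Location header before any body. A minimal sketch using Python's standard library (the old path and target URL below are made-up examples, not anything from this thread):

```python
# Sketch of a script-based 301 redirect using Python's standard library.
# The mapping below is hypothetical; in PHP or ASP the idea is the same:
# send the 301 status line and a Location header before any body.
import http.server
import threading

# Hypothetical mapping of retired paths to their new homes.
REDIRECTS = {"/old/old.html": "http://domain/new/new.html"}

class RedirectHandler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        target = REDIRECTS.get(self.path)
        if target:
            self.send_response(301)               # 301 Moved Permanently
            self.send_header("Location", target)  # where the page lives now
            self.end_headers()
        else:
            self.send_error(404)                  # anything else: not found

    def log_message(self, *args):
        pass                                      # silence request logging

def start_server():
    """Start the redirect server on a free local port; return the server."""
    server = http.server.HTTPServer(("127.0.0.1", 0), RedirectHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server
```

A client that does not auto-follow redirects will see the 301 status and the Location header directly.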

If you still don't find your content management system there, then use Google:
http://www.google.com/search?hl=en&l...P+301+redirect

If Google doesn't help, contact the authority responsible for your
content management system.

Note: Choose a good scheme for your URIs, so that they're not likely to
change:
http://www.w3.org/Provider/Style/URI

I hope it helps.
--
You can contact me at <ta********@yahoo.fr>
Aug 20 '07 #2

On 8/20/2007 12:08 AM, te*@shapin.org wrote:
I have an old link that was widely distributed. I would now like to
put a link on that old
page that will go to a new page without displaying anything.
Actually you should display a warning page. That way, visitors who have
set bookmarks (in IE, "favorites") or who have links in their own Web
pages can update them.

I generally replace the old page with a warning page for about 3 months.
The warning page tells the user that the URI has changed and gives the
new URI as a link. In case the user has dozed off or gone to get a
snack, I do not use REFRESH to reach the new page from the warning page.

As an alternative, you can set a "soft-link" on your Web server. This
causes any reference to the old file-path to become a reference to the
new file path. For this, you must have a shell account on your server so
that you can establish a Telnet or similar session.

In the directory that used to hold the old file, execute the command:
ln -s newfilepath oldfile
where "newfilepath" is the name and location of the new file relative to
the old file's directory and "oldfile" is the complete name (with
extension) of the old file. Thus, if I had an old page
<http://www.rossde.com/foo/work.html> and replaced it with
<http://www.rossde.com/bar/fun.html>, the command would be executed in
the /foo directory and would be
ln -s ../bar/fun.html work.html
Note that I use this only for temporary changes in URIs because I want
users to be able to update their bookmarks. I also use it where I
changed a graphic file and don't want to update all the HTML files that
reference it (e.g., replacing a PNG file with a GIF file).
--
David E. Ross
<http://www.rossde.com/>

Natural foods can be harmful: Look at all the
people who die of natural causes.
Aug 20 '07 #3

Scripsit David E. Ross:
On 8/20/2007 12:08 AM, te*@shapin.org wrote:
>I have an old link that was widely distributed. I would now like to
put a link on that old
page that will go to a new page without displaying anything.

Actually you should display a warning page.
Hardly. Sounds like a splash page. And it is.
That way, visitors who
have set bookmarks (in IE, "favorites") or who have links in their
own Web pages can update them.
How many people care about such things? Most people just want to see the
content.

If you care about such issues, you should include a note in the actual page
content, at its new address. Something like "This page now (since [insert
date]) resides at a new address. You may wish to consider updating it in
your bookmarks (favorites)." But even that is a bit naive.
I generally replace the old page with a warning page for about 3
months.
If you do that, search engines will see that the page content has radically
changed and is now minimal. What will they do? Well, they _may_ check the
link and consider indexing the linked page, as a new page. Or maybe as a new
address for an old page if they do some heuristics, but will they?

Instead, if you use HTTP redirect, as already described in a very good reply
here, search engines will notice that the address has changed and act
accordingly.

--
Jukka K. Korpela ("Yucca")
http://www.cs.tut.fi/~jkorpela/

Aug 20 '07 #4

On 20 Aug, 16:34, "David E. Ross" <nob...@nowhere.not> wrote:
Actually you should display a warning page.
Not for a 301. It's gone, just deal with it.

For a 302 there's some justification for a temporary notice of roadworks.

Aug 20 '07 #5


On Mon, 20 Aug 2007 17:34:10 +0200, David E. Ross <no****@nowhere.not>
wrote:
I also use it where I
changed a graphic file and don't want to update all the HTML files that
reference it (e.g., replacing a PNG file with a GIF file).
Advice (not to be taken as offensive, but only as advice): Don't put file
extensions in URIs. They're likely to change, and are usually irrelevant
to describing the resource identified by the URI.

HTTP provides a powerful content negotiation. The same resource,
identified by the same URI, can be represented in the best format
supported by the user agent. The same URI can be used either for a gif,
for a png, or for *both*.

One may think that this could be a problem with user agents that don't
have good support for HTTP. Actually, content negotiation of GIF/PNG
images for URIs without file extensions works even with Netscape
Navigator 1. No problem on that side.

Other guidelines for designing good URIs are available in the Tim
Berners-Lee article I previously cited:
http://www.w3.org/Provider/Style/URI

--
You can contact me at <ta********@yahoo.fr>
Aug 20 '07 #6

Stefan Ram wrote:
"André Gillibert" <ta*******@yahoo.frwrites:
>Don't put file extensions in URI. They're likely to change,

A URI cannot contain a file extension. In

http://example.com/example.htm

, the ».htm« is not a file extension,
just a part of the final path segment.

When the resource type was HTML and then becomes PDF,
there is no reason for this URI to change.
You mean, if we ignore the fact that the server will generally be
looking for a file called example.htm that no longer exists and won't
know to send, instead, the new file generated with Acrobat Distiller
that's called example.pdf?
Aug 20 '07 #7

On 8/20/2007 10:54 AM, André Gillibert wrote [in part]:
On Mon, 20 Aug 2007 17:34:10 +0200, David E. Ross <no****@nowhere.not>
wrote:
>I also use it where I
changed a graphic file and don't want to update all the HTML files that
reference it (e.g., replacing a PNG file with a GIF file).

Advice (not to be taken as offensive, but only as advice): Don't put file
extensions in URI. They're likely to change, and are usually irrelevant at
describing the resource identified by the URI.
I include the extensions so that I can do preliminary testing of pages
and their links to other pages in the site from my PC before I upload
the files. In this case, I'm not using a server; I'm using WindowsXP to
supply the referenced files. I have my complete Web site (actually, two
different sites) completely structured on my PC exactly as it is
structured on the server. The only difference is that my home page is
at <file:///D:/Web/myWeb/VCnet/index.html> on my PC instead of
<http://www.rossde.com/> (or <file:///D:/Web/CFOPWeb/index.html> instead
of <http://www.oakparkfoundation.org/>).

Of course, I could remove the extensions after my local testing.
However, experience has taught me that such modifications then require
additional testing.

--
David E. Ross
<http://www.rossde.com/>

Natural foods can be harmful: Look at all the
people who die of natural causes.
Aug 20 '07 #8

On Mon, 20 Aug 2007 23:24:00 +0200, Stefan Ram <ra*@zedat.fu-berlin.de>
wrote:
ra*@zedat.fu-berlin.de (Stefan Ram) writes:
>I admit that I also feel fooled when I activate a download
link ending in ».zip« and are being led to an HTML resource
(I believe I saw this at sourceforge).

One could even »download to ...« the resource and then believe
having securely stored the ZIP resource. Possibly learning
only much later that not a ZIP but an HTML resource was stored.
The web browser I use (Opera), on systems where file extensions are used
to identify file formats (such as Microsoft Windows systems), ignores the
last part of the URI (following the "."), and computes the file extension
from the MIME type.

I guess that on systems where other means (e.g. file headers) are used to
identify the format of files, it would entirely remove the .zip, and
wouldn't append any file extension.

One could argue that this behavior is bogus, as there's no reason to
remove the last few characters of an opaque URI (in that case, on Windows
systems, the local file name should be something.zip.html). On the other
hand, since these last few characters ought to be omitted in the first
place, it "fixes" bogus URIs, most of the time.

Anyway, most user agents have a different behavior.

--
You can contact me at <ta********@yahoo.fr>
Aug 20 '07 #9

On 8/20/2007 11:58 AM, David E. Ross wrote:
On 8/20/2007 10:54 AM, André Gillibert wrote [in part]:
>On Mon, 20 Aug 2007 17:34:10 +0200, David E. Ross <no****@nowhere.not>
wrote:
>>I also use it where I
changed a graphic file and don't want to update all the HTML files that
reference it (e.g., replacing a PNG file with a GIF file).
Advice (not to be taken as offensive, but only as advice): Don't put file
extensions in URI. They're likely to change, and are usually irrelevant at
describing the resource identified by the URI.

I include the extensions so that I can do preliminary testing of pages
and their links to other pages in the site from my PC before I upload
the files. In this case, I'm not using a server; I'm using WindowsXP to
supply the referenced files. I have my complete Web site (actually, two
different sites) completely structured on my PC exactly as it is
structured on the server. The only difference is that my home page is
at <file:///D:/Web/myWeb/VCnet/index.html> on my PC instead of
<http://www.rossde.com/> (or <file:///D:/Web/CFOPWeb/index.html> instead
of <http://www.oakparkfoundation.org/>).

Of course, I could remove the extensions after my local testing.
However, experience has taught me that such modifications then require
additional testing.
I almost forgot additional reasons to include extensions.

Extensions are not meaningless to me when I maintain my Web site even if
they might not mean anything to my Web server. I give my files mnemonic
names. For example, I have euro.html, which discusses the symbol for
the euro currency; and I have euro.gif, which displays that symbol.
Since I cannot have two files in the same directory with the same name,
the extensions make the names different. In any case, even when I do
not have such potentially conflicting names, I can tell immediately when
I view a directory which files contain HTML and which files contain
images (and what kinds of images: icon, GIF, or JPEG). This helps me to
determine which editor to use on a file.

Another use of extensions is in the index I display at
<http://www.rossde.com/get_index.html>. This uses an SSI script that
involves the UNIX command
ls -ld <directoryname>/*.html
for each directory in my site. Thus, instead of displaying over 900
files, the script displays the approximately 350 files that contain
HTML. (Yes, I know I could use
ls -ld <directoryname>/* | grep .html
but that would still require the use of the .html extension on my HTML
files. Other methods that would actually examine the contents of the
files to determine which contain HTML would be too complicated and would
execute too slowly.)

In any case, the average user merely notices underlined, colored text
and recognizes it as a link. Users rarely examine the actual URI.
Thus, I don't understand why it is such a big deal whether I name an
HTML file euro.html, euro_html, or eurohtml. To the Web server, these
are all merely file names.

(I do examine the actual URI -- displayed when my cursor is over the
link -- but only when I'm about to do tabbed browsing. I do that to
ensure the link is not JavaScript, for which SeaMonkey will not open a
new tab unless the script attempts to force a new window.)

--
David E. Ross
<http://www.rossde.com/>

Natural foods can be harmful: Look at all the
people who die of natural causes.
Aug 21 '07 #10

On Tue, 21 Aug 2007 02:17:56 +0200, David E. Ross <no****@nowhere.not>
wrote:
On 8/20/2007 11:58 AM, David E. Ross wrote:
>>
I include the extensions so that I can do preliminary testing of pages
and their links to other pages in the site from my PC before I upload
the files. In this case, I'm not using a server; I'm using WindowsXP to
supply the referenced files. I have my complete Web site (actually, two
different sites) completely structured on my PC exactly as it is
structured on the server. The only difference is that my home page is
at <file:///D:/Web/myWeb/VCnet/index.htmlon my PC instead of
<http://www.rossde.com/(or <file:///D:/Web/CFOPWeb/index.htmlinstead
of <http://www.oakparkfoundation.org/>).
That makes sense, but, in my opinion, such authoring details shouldn't
affect the URI.
You should either find an existing utility, or write one, that finds all
the relative URIs to other local files inside your HTML files and
replaces them with URIs lacking the extension.
If your Windows XP server cannot serve files without their extensions
and/or doesn't provide a correct content negotiation system (e.g. you
cannot have a GIF and a PNG, or a text/plain and a text/html, served
under the same URI), then change your server. I don't want broken tools
to break the Web. I understand that broken authoring tools have as much
(or even more) responsibility as authors for the infamous 404 reply. So,
please, use good tools.
>Of course, I could remove the extensions after my local testing.
However, experience has taught me that such modifications then require
addtional testing.
Yes, even with a utility, you would have to test. A site has to be tested,
anyway.
This is part of the work of the webmaster.
>
I almost forgot additional reasons to include extensions.

Extensions are not meaningless to me when I maintain my Web site even if
they might not mean anything to my Web server. I give my files mnemonic
names. For example, I have euro.html, which discusses the symbol for
the euro currency; and I have euro.gif, which displays that symbol.
If euro.html discusses the symbol, name it euro-discussion or about-euro,
and if euro.gif graphically represents a euro symbol, well, keep the
name euro (without .gif) or give it the name euro-image or
this-is-not-a-euro :).

<snip other real-life reasons to keep file extensions when authoring>

This may be helpful during your work but shouldn't appear in the published
work. Your Web server shouldn't be a strict mirror of your local files.
Ideally, you should think about the overall design of your web site from a
user point of view, before using any tool.
Yes. That is actually *work*. But it results in higher quality sites.
In any case, the average user merely notices underlined, colored text
and recognizes it as a link. Users rarely examine the actual URI.
That depends on the user. ;)
Anyway, that's not very relevant.
Thus, I don't understand why it is such a big deal whether I name an
HTML file euro.html, euro_html, or eurohtml. To the Web server, these
are all merely file names.
It matters because, even if the user doesn't examine the actual URI, he may:
1) Bookmark it.
2) Link to it from his own web site, from a weblog, from a web BBS, from a
Usenet newsgroup, from a paper book, from an online news site or from a
physical newspaper, from an online or offline review article.
3) Share it with his friends, by phone, by e-mail, by paper mail, by
word of mouth, or by any existing medium.

Consequently, a link may be stored on every existing medium (the whole
point of URIs is that they're short ASCII character strings that can be
transferred and stored anywhere).

Then, when you change the structure of your site:
1) Either you keep old links indefinitely. In that case, the lack of
abstraction in your URIs, which once simplified your work, will become a
nightmare at version 3 or 4 of your web site.
2) Or you break old links without warning. In that case, all the links
will be immediately broken.
3) Or you provide 301 HTTP redirections for a limited time (6 months? 1
year? 2 years?) to let people "update their bookmarks and links".
That solves the problem of people who bookmarked your website and go to it
often.

1) But that doesn't solve the problem of people who bookmarked your
website and won't come back for 3 years. I, and probably many other
people, use bookmarks as a sort of persistent history of good sites (if
your site is good, then make your URIs persistent).
2) On a standard Usenet post, links are never ever updated, because Usenet
doesn't provide any real editing facility. Even if these posts aren't
stored on NNTP servers anymore, they're stored on archive servers (e.g.
Google Groups). For example, I won't ever update the links I posted in
this thread. I won't because I couldn't. Anyway, that would be too much
work for a few posts I wrote quickly. If one day somebody finds this
thread on Google Groups, whether or not he agrees with me, he may really
want to follow these links.
3) Everything on physical paper is (almost) never updated. The infamous
404 error is almost *guaranteed* when reaching a website from an old
physical newspaper, an old book (did you know that books live more than 3
years?), or a piece of paper or letter a friend gave you.
4) Web BBSes share the same problem as Usenet: nobody will ever update the
link in an old post, unless somebody reports the broken link and the
administrator is very, very nice.
5) Other personal web sites. Links may be updated if the site is still
actively maintained. Unmaintained or badly maintained sites are a very
large part of the Web.
6) Weblogs. Even maintained weblogs have so many entries that most weblog
posters won't update links. There *may* be very good administrators who
use specific tools to warn them immediately whenever a URI is broken in
any weblog entry. It could even be possible to automatically update the
link target when a 301 HTTP reply is met. With all the two-cent CMSes
that most people use, I think it's done for only a tiny minority of
weblogs.
7) Professional websites with static content. They should update their
links, but I find that most of them don't do that.
8) Professional websites with dynamic content, in the form of articles.
Usually, the maintenance of old articles is pretty bad. Some links may
have to be updated by the article author, which means it won't be done if
the article is more than 3 years old, unless the author is a very good
maintainer.

You see that very few links are updated. Updated links are a myth.

If your site is not popular at all, you will only break the zero, one or
two links pointing to your website (and the popularity of your website
will stagnate).
But, if your site is popular or has rising popularity, breaking links
is a very bad idea.

I know that this post is long and most of its contents are "obvious". But
I wanted to dispel the myth that "people will update their
links/bookmarks".

--
You can contact me at <ta********@yahoo.fr>
Aug 21 '07 #11

Well bust mah britches and call me cheeky, on Wed, 22 Aug 2007 04:19:56
GMT Ed Mullen scribed:

"Disgruntled?" If I'm not, then I guess I'm "gruntled." What the heck
is "gruntled?"

You weren't in the Army, were you? Means led by grunts and liking it.

--
Neredbojias
Half lies are worth twice as much as whole lies.
Aug 22 '07 #12

Ed Mullen wrote:
And ... enjoying concurrence with all David said ... I maintain several
Gigabytes of Web space quite successfully with the model we're
describing. I'm not about to fix what ain't broke
How do you know it ain't broke?

I don't know if you surf on the web. Personally, I see that everything
that's more than 5 years old is full of broken links. The Web is very
volatile as it currently is. Bad URI design is one of the major reasons.

archive.org is the only hope to see something old, but that doesn't work
for sites that hadn't been well crawled.

The fact that many dudes do the same thing as you is not an argument in
favor of doing so.
We're not supposed to be sheep of Panurge.

--
You can contact me at <ta*****************@yahoDELETETHATo.fr>
Aug 22 '07 #13

Tue, 21 Aug 2007 20:07:15 -0700 from David E. Ross
<no****@nowhere.not>:
As Ed Mullen pointed out, I'm not interested in having a server on my
PC. I replicate my Web site on my PC (1) to test pages before uploading
them and (2) as a backup in case my ISP's Web server crashes.
I bet most of us replicate our sites on our local PCs for backup and
testing.

But that does mean having a server, though not necessarily a public
server. A browser doesn't know what to do with a link like "./" or
"./math/" -- it needs a server to tell it to translate that into
index.htm or default.html or whatever.

As others have posted, setting up Apache is pretty simple. I'd give
details, but I did it two and a half years ago and can't remember what
I did. :-) I run it only when I'm testing my local copy of my site.

--
Stan Brown, Oak Road Systems, Tompkins County, New York, USA
http://OakRoadSystems.com/
HTML 4.01 spec: http://www.w3.org/TR/html401/
validator: http://validator.w3.org/
CSS 2.1 spec: http://www.w3.org/TR/CSS21/
validator: http://jigsaw.w3.org/css-validator/
Why We Won't Help You:
http://diveintomark.org/archives/200..._wont_help_you
Aug 22 '07 #14

On 8/22/2007 4:20 AM, Stan Brown wrote:
Tue, 21 Aug 2007 20:07:15 -0700 from David E. Ross
<no****@nowhere.not>:
>As Ed Mullen pointed out, I'm not interested in having a server on my
PC. I replicate my Web site on my PC (1) to test pages before uploading
them and (2) as a backup in case my ISP's Web server crashes.

I bet most of us replicate our sites on our local PCs for backup and
testing.

But that does mean having a server, though not necessarily a public
server. A browser doesn't know what to do with a link like "./" or
"./math/" -- it needs a server to tell it to translate that into
index.htm or default.html or whatever.

As others have posted, setting up Apache is pretty simple. I'd give
details, but I did it two and a half years ago and can't remember what
I did. :-) I run it only when I'm testing my local copy of my site.
I do have a file server. It's called Windows XP. I do not have a Web
server.

I have links in my Web pages in the form "../../index.html". On my PC,
Windows seems to handle these okay. Both SeaMonkey and IE successfully
navigate such links up two directories when found in LOCAL files on my PC.

Having been a UNIX programmer (my SSI scripts are still in UNIX Korn), I
know not to create Web page links in the form "./something.html".
Instead, the link would be "something.html", avoiding a gratuitous
reference to the current directory.

If the file name "euro.gif" is bad, what about the file name
"euro.discuss.symbol"? Both are in the form of UNIX file names and
comply with RFC 3986 (which allows periods in the names).

--
David E. Ross
<http://www.rossde.com/>

Natural foods can be harmful: Look at all the
people who die of natural causes.
Aug 22 '07 #15

On Wed, 22 Aug 2007, David E. Ross wrote:
If the file name "euro.gif" is bad,
The file name euro.gif is not bad. But you could omit ".gif" in
the address (URI) when you have several formats, say GIF and PNG.
http://www.w3.org/Icons/valid-html401
http://www.w3.org/Icons/
what about the file name "euro.discuss.symbol"?
It's pointless. See
http://httpd.apache.org/docs/2.0/con...on.html#naming
for some useful examples.
Aug 22 '07 #16

André Gillibert wrote:
Ed Mullen wrote:
>And ... enjoying concurrence with all David said ... I maintain
several Gigabytes of Web space quite successfully with the model we're
describing. I'm not about to fix what ain't broke

How do you know it ain't broke?
I was referring to my own sites. They ain't broke. No broken links
other than, perhaps, the occasional one to an external site. Which I
periodically catch. And what says links shouldn't die? If they're of
no use any more, kill them. I clean out my basement periodically. Why
not Web pages?

--
Ed Mullen
http://edmullen.net
http://mozilla.edmullen.net
http://abington.edmullen.net
Ambivalent? Well, yes and no.
Aug 22 '07 #17

..oO(Ed Mullen)
>And what says links shouldn't die? If they're of
no use any more, kill them. I clean out my basement periodically. Why
not Web pages?
Depends on how it's done.

A killed URL should be answered with an appropriate HTTP status code,
for example 410. Of course much better in many cases would be a 3xx
redirect to another page. Even a more user-friendly error page, maybe
with some additional hints or a search form, would be much better than
just the usual "404 - Not Found".
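In Apache terms, the options above can be sketched as follows (a sketch only, assuming mod_alias; the paths are hypothetical examples):

```apache
# Sketch only; paths are hypothetical.
# Tell clients the resource is gone for good (410 Gone):
Redirect gone /retired/page.html

# Or send them somewhere useful instead (301 Moved Permanently):
Redirect permanent /retired/page.html /replacement.html

# And make the unavoidable 404s friendlier with a custom error page:
ErrorDocument 404 /notfound.html
```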

Not doing this and killing URLs silently leads to link rot, which is one
of the Web's major problems.

Micha
Aug 22 '07 #18

Ed Mullen wrote:
André Gillibert wrote:
I was referring to my own sites. They ain't broke. No broken links
other than, perhaps, the occasional one to an external site. Which I
periodically catch.
Good, updating your links is great, but I was referring to links from
outside your site to your site.
And what says links shouldn't die? If they're of no use any more, kill
them.
*Very* few resources that once were valuable are of no use...
Even material that seems to be "obsolete" has historical value.

Moreover, moving resources (301, often followed by 404 after a short or
long delay) is always harmful, as a moved resource must have some value;
otherwise it would have been removed (410 or 404) in the first place!

I feel bad when I see that one year back I had found something valuable,
but I had partially forgotten it, and I discover that the resource
vanished. In that case, I see that my *brain* has a more reliable and
persistent memory than the Web!

Resources that one may judge valuable may be judged useless by most
people, including the resource's author. For example, *I* would find the
specification of an arcane computer architecture precious, even 20 years
after the last computer using this architecture died.

--
You can contact me at <ta*****************@yahoDELETETHATo.fr>
Aug 22 '07 #19

On 8/22/2007 9:19 AM, Andreas Prilop wrote:
On Wed, 22 Aug 2007, David E. Ross wrote:
>If the file name "euro.gif" is bad,

The file name euro.gif is not bad. But you could omit ".gif" in
the address (URI) when you have several formats, say GIF and PNG.
http://www.w3.org/Icons/valid-html401
http://www.w3.org/Icons/
>what about the file name "euro.discuss.symbol"?

It's pointless. See
http://httpd.apache.org/docs/2.0/con...on.html#naming
for some useful examples.
That seems to imply that the URI <http://www.rossde.com/euro> would
deliver the file euro.html from my www.rossde.com domain. Knowing that
my ISP's Web server is indeed Apache, I tried it. The result was 404.

--
David E. Ross
<http://www.rossde.com/>

Natural foods can be harmful: Look at all the
people who die of natural causes.
Aug 22 '07 #20

Wed, 22 Aug 2007 08:37:14 -0700 from David E. Ross
<no****@nowhere.not>:
On 8/22/2007 4:20 AM, Stan Brown wrote:
I bet most of us replicate our sites on our local PCs for backup and
testing.

But that does mean having a server, though not necessarily a public
server. A browser doesn't know what to do with a link like "./" or
"./math/" -- it needs a server to tell it to translate that into
index.htm or default.html or whatever.
I have links in my Web pages in the form "../../index.html". On my PC,
Windows seems to handle these okay. Both SeaMonkey and IE successfully
navigate such links up two directories when found in LOCAL files on my PC.
Of course they do, but so what? You haven't refuted my point.

--
Stan Brown, Oak Road Systems, Tompkins County, New York, USA
http://OakRoadSystems.com/
HTML 4.01 spec: http://www.w3.org/TR/html401/
validator: http://validator.w3.org/
CSS 2.1 spec: http://www.w3.org/TR/CSS21/
validator: http://jigsaw.w3.org/css-validator/
Why We Won't Help You:
http://diveintomark.org/archives/200..._wont_help_you
Aug 23 '07 #21

Ed Mullen wrote:
I don't care how easy it is, I don't need it or the extra maintenance
issue. Everything is working just fine using relative paths locally and
in the files online.
That works only as long as your pages are purely static HTML with nothing
server-side... A private development server is needed to test more
dynamically generated sites. Also, many Windows newbies can be stung by
Windows' case-insensitivity and are surprised when locally functioning
sites break when uploaded...

--
Take care,

Jonathan
-------------------
LITTLE WORKS STUDIO
http://www.LittleWorksStudio.com
Aug 23 '07 #22

Jonathan N. Little wrote:
Ed Mullen wrote:
>I don't care how easy it is, I don't need it or the extra maintenance
issue. Everything is working just fine using relative paths locally
and in the files online.

That works only as long as your pages are purely static HTML with nothing
server-side... A private development server is needed to test more
dynamically generated sites. Also, many Windows newbies can be stung by
Windows' case-insensitivity and are surprised when locally functioning
sites break when uploaded...
Fair enough. I'm aware of case issues and am not a newbie. But do you
think that a newbie or casual Web page creator could install and manage
a server? And figure out how to use and manage it? If it's the newbie
we're worried about, I'd be as likely to suggest to them installing a
server as installing Linux or buying a mainframe. Or taking up brain
surgery.

I appreciate that many here are "professional" developers: It's why I
read the group, to garner bits of wisdom from those who know more than I
do. Still, not everyone who wants to engage in Web page creation is
running a business or developing for one. Many are simply engaged in it
as a hobby, some more demanding than others. Some of those (like me)
want to do as much as they can to "do it right." But there are
practical limits with any such endeavor.
--
Ed Mullen
http://edmullen.net
http://mozilla.edmullen.net
http://abington.edmullen.net
All of us could take a lesson from the weather. It pays no attention to
criticism.
Aug 23 '07 #23

Ed Mullen wrote:
Jonathan N. Little wrote:
>Ed Mullen wrote:
>>I don't care how easy it is, I don't need it or the extra maintenance
issue. Everything is working just fine using relative paths locally
and in the files online.

As long as your pages are purely static html and no server-side...a
private development server is needed to test more dynamically
generated sites. Also many Windows newbies can be stung Windows
case-insensitivity and are surprised when locally functioning sites
break when uploaded...

Fair enough. I'm aware of case issues and am not a newbie. But do you
think that a newbie or casual Web page creator could install and manage
a server? And figure out how to use and manage it? If it's the newbie
we're worried about, I'd be as likely to suggest to them installing a
server as installing Linux or buying a mainframe. Or taking up brain
surgery.
Apache is really easy to set up and *really* easy on Linux. Maybe newbies
could haul out an old box from the closet and slap Ubuntu onto it and
have a virtual server up in no time...about a dozen lines of text in
/etc/httpd/conf/vhosts/Vhosts.conf, or use WebAdmin, really easy for the
GUI addicted.
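A minimal sketch of the sort of vhost entry meant here (the server name and paths are hypothetical; adjust to your distro's layout):

```
# Name-based virtual host for a private development server
# (hypothetical names/paths -- change to suit your setup)
<VirtualHost 127.0.0.1:80>
    ServerName dev.example.test
    DocumentRoot /home/me/www/example
    <Directory /home/me/www/example>
        Options +MultiViews
        AllowOverride All
    </Directory>
</VirtualHost>
```

Pair it with a hosts-file entry such as "127.0.0.1 dev.example.test" so the browser can resolve the name locally.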

--
Take care,

Jonathan
-------------------
LITTLE WORKS STUDIO
http://www.LittleWorksStudio.com
Aug 23 '07 #24

P: n/a
"David E. Ross" <no****@nowhere.not> writes:
On 8/22/2007 9:19 AM, Andreas Prilop wrote:
>On Wed, 22 Aug 2007, David E. Ross wrote:
>>If the file name "euro.gif" is bad,

The file name euro.gif is not bad. But you could omit ".gif" in
the address (URI) when you have several formats, say GIF and PNG.
http://www.w3.org/Icons/valid-html401
http://www.w3.org/Icons/
>>what about the file name "euro.discuss.symbol"?

It's pointless. See
http://httpd.apache.org/docs/2.0/con...on.html#naming
for some useful examples.

That seems to imply that the URI <http://www.rossde.com/euro> would
deliver the file euro.html from my www.rossde.com domain. Knowing that
my ISP's Web server is indeed Apache, I tried it. The result was
404.
Did you turn MultiViews on? The server may have been set up to
disallow this setting, but it is usually considered harmless and I
have not had any (Apache) web space where it could not be turned on.

--
Ben.
Aug 23 '07 #25

P: n/a
On Wed, 22 Aug 2007, David E. Ross wrote:
That seems to imply that the URI <http://www.rossde.com/euro> would
deliver the file euro.html from my www.rossde.com domain. Knowing that
my ISP's Web server is indeed Apache, I tried it. The result was 404.
Then write into your .htaccess file

Options +MultiViews

But if you have only one resource euro.html it doesn't make
much sense to change the existing and known address. It is useful
when you have several versions such as euro.en.html euro.fr.html
euro.de.html .

You should also write into your .htaccess file

AddCharset windows-1252 html

and delete the silly <meta http-equiv> from your files.
http://www.unics.uni-hannover.de/nht...a-http-equiv.1
http://www.unics.uni-hannover.de/nht...a-http-equiv.2
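Putting the two suggestions together, the .htaccess fragment would look like this (a sketch; the charset line assumes your .html files really are Windows-1252):

```
# Let /euro answer for euro.html, euro.en.html, euro.fr.html, etc.
Options +MultiViews

# Declare the encoding in the real HTTP header
# instead of a <meta http-equiv> in each file
AddCharset windows-1252 .html
```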
Aug 23 '07 #26

P: n/a
On 8/23/2007 1:13 AM, Andreas Prilop wrote:
On Wed, 22 Aug 2007, David E. Ross wrote:
>That seems to imply that the URI <http://www.rossde.com/euro> would
deliver the file euro.html from my www.rossde.com domain. Knowing that
my ISP's Web server is indeed Apache, I tried it. The result was 404.

Then write into your .htaccess file

Options +MultiViews

But if you have only one resource euro.html it doesn't make
much sense to change the existing and known address. It is useful
when you have several versions such as euro.en.html euro.fr.html
euro.de.html .

You should also write into your .htaccess file

AddCharset windows-1252 html

and delete the silly <meta http-equiv> from your files.
http://www.unics.uni-hannover.de/nht...a-http-equiv.1
http://www.unics.uni-hannover.de/nht...a-http-equiv.2
Actually, most of my pages use ISO-8859-1. The euro page and one or two
others use WINDOWS-1252 because I'm demonstrating a WINDOWS-1252
capability.

--
David E. Ross
<http://www.rossde.com/>

Natural foods can be harmful: Look at all the
people who die of natural causes.
Aug 23 '07 #27

P: n/a
On Thu, 23 Aug 2007, David E. Ross wrote:
>You should also write into your .htaccess file
AddCharset windows-1252 html
and delete the silly <meta http-equiv> from your files.
http://www.unics.uni-hannover.de/nht...a-http-equiv.1

Actually, most of my pages use ISO-8859-1.
Fine.
But my point is to define the encoding (charset) in the HTTP header
- whether it is ISO-8859-1 or Windows-1252 or UTF-8 or whatever.

Windows-1252 is a superset of ISO-8859-1. So when ISO-8859-1
is correct, then Windows-1252 is also correct.
Aug 23 '07 #28

P: n/a
On 8/22/2007 7:00 PM, Jonathan N. Little wrote:
Ed Mullen wrote:
>I don't care how easy it is, I don't need it or the extra maintenance
issue. Everything is working just fine using relative paths locally and
in the files online.

As long as your pages are purely static html and no server-side...a
private development server is needed to test more dynamically generated
sites. Also many Windows newbies can be stung by Windows case-insensitivity
and are surprised when locally functioning sites break when uploaded...
I use a secure Telnet session into a shell account on my ISP's Web
server to test SSI scripts. I then insert the markup into my local HTML
files to make sure the HTML is not broken. The W3C validators ignore
that markup since it's in the form of HTML comments. Finally, I upload
both the HTML and script files to my ISP's server for final testing.
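For anyone following along, the markup in question looks like an ordinary HTML comment, which is why validators pass over it untouched; the server expands it only when SSI is enabled for the file (the include path and variable here are hypothetical):

```
<!-- Processed by the server only when SSI is enabled for this file;
     validators treat both lines below as plain comments -->
<!--#include virtual="/ssi/footer.html" -->
<!--#echo var="LAST_MODIFIED" -->
```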

Since I was well experienced coding with UNIX before I ever saw a PC
with Windows, I'm well familiar with case-sensitivity. In fact, my UNIX
experience is what pushes me to continue using file names such as
euro.html. Extensions mean nothing in UNIX; they're just part of a file
name.

Coding UNIX, I got used to using underscores (_) in place of spaces
within file names and periods to set off my own indication of a file
type at the end. This was a personal convention that helped me to
identify files without having to open them or look them up in some
listing. Using the same name but different "extensions" allowed me to
group related files while still distinguishing them (e.g., similar
scripts in UNIX-C (.csh) and UNIX Korn (.ksh) for use on different
machines). All this was very effective in creating large scripts used
by others to maintain a software system for tracking and operating space
satellites for the Pentagon.

--
David E. Ross
<http://www.rossde.com/>

Natural foods can be harmful: Look at all the
people who die of natural causes.
Aug 23 '07 #29

P: n/a
On 8/23/2007 8:16 AM, Andreas Prilop wrote:
On Thu, 23 Aug 2007, David E. Ross wrote:
>>You should also write into your .htaccess file
AddCharset windows-1252 html
and delete the silly <meta http-equivfrom your files.
http://www.unics.uni-hannover.de/nht...a-http-equiv.1
Actually, most of my pages use ISO-8859-1.

Fine.
But my point is to define the encoding (charset) in the HTTP header
- whether it is ISO-8859-1 or Windows-1252 or UTF-8 or whatever.

Windows-1252 is a superset of ISO-8859-1. So when ISO-8859-1
is correct, then Windows-1252 is also correct.
The W3C HTML validator recognizes
<META HTTP-EQUIV="Content-Type" content="text/html; charset=ISO-8859-1">
(or whatever other charset I might specify). For ISO-8859-1, the
validator rejects escape-coded Windows characters (e.g., Alt-0133 for
ellipses); for Windows-1252, the validator accepts them.

--
David E. Ross
<http://www.rossde.com/>

Natural foods can be harmful: Look at all the
people who die of natural causes.
Aug 23 '07 #30

P: n/a
David E. Ross wrote:
On 8/23/2007 8:16 AM, Andreas Prilop wrote:
>But my point is to define the encoding (charset) in the HTTP header
- whether it is ISO-8859-1 or Windows-1252 or UTF-8 or whatever.

Windows-1252 is a superset of ISO-8859-1. So when ISO-8859-1
is correct, then Windows-1252 is also correct.

The W3C HTML validator recognizes
<META HTTP-EQUIV="Content-Type" content="text/html; charset=ISO-8859-1">
(or whatever other charset I might specify). For ISO-8859-1, the
validator rejects escape-coded Windows characters (e.g., Alt-0133 for
ellipses); for Windows-1252, the validator accepts them.
Yep, that's what Andreas Prilop called a "superset".
That doesn't change the fact that the Windows-1252 charset should be
specified in the HTTP header.

Actually, I don't really understand the motivation of your answer.

--
You can contact me at <ta*****************@yahoDELETETHATo.fr>
Aug 23 '07 #31

P: n/a
Wed, 22 Aug 2007 21:26:56 -0400 from Ed Mullen <ed@edmullen.net>:
Stan Brown wrote:
Tue, 21 Aug 2007 20:07:15 -0700 from David E. Ross
<no****@nowhere.not>:
As Ed Mullen pointed out, I'm not interested in having a server on my
PC. I replicate my Web site on my PC (1) to test pages before uploading
them and (2) as a backup in case my ISP's Web server crashes.
I bet most of us replicate our sites on our local PCs for backup and
testing.

I do.

But that does mean having a server, though not necessarily a public
server.

It does not require me to have a Web server running on my local machine.
Sorry, but that's exactly what it means. I can't speak for UNIX, but
if you're on Windoze as I am then the operating system does not know
what a URL ending in a / mark (or /# plus an anchor) means. And if
you don't have a server, there is nothing to tell your browser what
it means.

You can get around that by specifying the index page everywhere:
"./index.htm" instead of "./" and so forth. I'll leave it to Jukka to
explain why that's a bad idea -- it was he who persuaded me to stop
doing that and start letting index pages be "./" and similar.
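The point is easy to demonstrate with the server that ships in Python's standard library: the index-page rule for "./" is applied by the server, not by the operating system or the browser (a sketch; the page content and temp directory are arbitrary):

```python
# Demonstrates that "/" -> index.html is server behavior, not OS behavior.
import functools
import http.server
import os
import socketserver
import tempfile
import threading
import urllib.request

# Create a throwaway document root containing only an index page.
tmp = tempfile.mkdtemp()
with open(os.path.join(tmp, "index.html"), "w") as f:
    f.write("<p>served as the directory index</p>")

handler = functools.partial(http.server.SimpleHTTPRequestHandler, directory=tmp)
with socketserver.TCPServer(("127.0.0.1", 0), handler) as httpd:
    port = httpd.server_address[1]
    threading.Thread(target=httpd.serve_forever, daemon=True).start()
    # A request for "/" returns index.html, exactly as on the live site;
    # opening the same directory via file:// would only show a listing.
    body = urllib.request.urlopen(f"http://127.0.0.1:{port}/").read().decode()
    httpd.shutdown()

print(body)
```

Run locally, the request for "/" comes back with the index page's markup, which is what a bare file:// URL cannot do for you.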

--
Stan Brown, Oak Road Systems, Tompkins County, New York, USA
http://OakRoadSystems.com/
HTML 4.01 spec: http://www.w3.org/TR/html401/
validator: http://validator.w3.org/
CSS 2.1 spec: http://www.w3.org/TR/CSS21/
validator: http://jigsaw.w3.org/css-validator/
Why We Won't Help You:
http://diveintomark.org/archives/200..._wont_help_you
Aug 24 '07 #32

P: n/a
Stan Brown wrote:
Wed, 22 Aug 2007 21:26:56 -0400 from Ed Mullen <ed@edmullen.net>:
>Stan Brown wrote:
>>Tue, 21 Aug 2007 20:07:15 -0700 from David E. Ross
<no****@nowhere.not>:
As Ed Mullen pointed out, I'm not interested in having a server on my
PC. I replicate my Web site on my PC (1) to test pages before uploading
them and (2) as a backup in case my ISP's Web server crashes.
I bet most of us replicate our sites on our local PCs for backup and
testing.
I do.
>>But that does mean having a server, though not necessarily a public
server.
It does not require me to have a Web server running on my local machine.

Sorry, but that's exactly what it means. I can't speak for UNIX, but
if you're on Windoze as I am then the operating system does not know
what a URL ending in a / mark (or /# plus an anchor) means. And if
you don't have a server, there is nothing to tell your browser what
it means.

You can get around that by specifying the index page everywhere:
"./index.htm" instead of "./" and so forth. I'll leave it to Jukka to
explain why that's a bad idea -- it was he who persuaded me to stop
doing that and start letting index pages be "./" and similar.
Quite true. I was reading too fast and misinterpreting "./" in the
sense of a path, not a complete URL. However, my point, my thinking, is
that I never would specify a URL that does not contain the full file
name, all the arguments to the contrary. In that case, I don't need a
server on my local machine. It works for having the local copy of
my site be identical to the site server for development purposes.

--
Ed Mullen
http://edmullen.net
http://mozilla.edmullen.net
http://abington.edmullen.net
There is no reason anyone would want a computer in their home. - Ken
Olson, president, chairman and founder of Digital Equipment Corp., 1977
Aug 24 '07 #33

P: n/a
On 8/23/2007 3:07 PM, André Gillibert wrote:
David E. Ross wrote:
>On 8/23/2007 8:16 AM, Andreas Prilop wrote:
>>But my point is to define the encoding (charset) in the HTTP header
- whether it is ISO-8859-1 or Windows-1252 or UTF-8 or whatever.

Windows-1252 is a superset of ISO-8859-1. So when ISO-8859-1
is correct, then Windows-1252 is also correct.
The W3C HTML validator recognizes
<META HTTP-EQUIV="Content-Type" content="text/html; charset=ISO-8859-1">
(or whatever other charset I might specify). For ISO-8859-1, the
validator rejects escape-coded Windows characters (e.g., Alt-0133 for
ellipses); for Windows-1252, the validator accepts them.

Yep, that's what Andreas Prilop called a "superset".
That doesn't change the fact that the Windows-1252 charset should be
specified in the HTTP header.

Actually, I don't really understand the motivation of your answer.
I was explaining that I use a <META> tag to specify the character set rather
than using a server command to specify it.

--
David E. Ross
<http://www.rossde.com/>

Natural foods can be harmful: Look at all the
people who die of natural causes.
Aug 24 '07 #34

P: n/a
On 8/23/2007 5:30 PM, Stan Brown wrote:
Wed, 22 Aug 2007 21:26:56 -0400 from Ed Mullen <ed@edmullen.net>:
>Stan Brown wrote:
>>Tue, 21 Aug 2007 20:07:15 -0700 from David E. Ross
<no****@nowhere.not>:
As Ed Mullen pointed out, I'm not interested in having a server on my
PC. I replicate my Web site on my PC (1) to test pages before uploading
them and (2) as a backup in case my ISP's Web server crashes.
I bet most of us replicate our sites on our local PCs for backup and
testing.
I do.
>>But that does mean having a server, though not necessarily a public
server.
It does not require me to have a Web server running on my local machine.

Sorry, but that's exactly what it means. I can't speak for UNIX, but
if you're on Windoze as I am then the operating system does not know
what a URL ending in a / mark (or /# plus an anchor) means. And if
you don't have a server, there is nothing to tell your browser what
it means.

You can get around that by specifying the index page everywhere:
"./index.htm" instead of "./" and so forth. I'll leave it to Jukka to
explain why that's a bad idea -- it was he who persuaded me to stop
doing that and start letting index pages be "./" and similar.
Links in my pages to other pages in my Web site always give the file
name, never using the default. As explained earlier by Brown, the
default is a function of the server. If I were to move my Web site from
my current ISP to another ISP, I don't want to be concerned about
different default file names. Thus, my links are of the form
<http://www.rossde.com/PGP/index.html> and not
<http://www.rossde.com/PGP/>. That way, I won't have to change it to
<http://www.rossde.com/PGP/home.html> (because home.html is the new
server's default file name) or alternatively contrive to override my new
ISP's setup.
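For what it's worth, that dependency on the new ISP's default can also be removed server-side, assuming .htaccess overrides are allowed (a one-line sketch):

```
# Make index.html the directory default regardless of the server's own setting
DirectoryIndex index.html
```

With that in place, links of the form <http://www.rossde.com/PGP/> would keep working after a move, though spelling out the file name as above avoids relying on it.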

--
David E. Ross
<http://www.rossde.com/>

Natural foods can be harmful: Look at all the
people who die of natural causes.
Aug 24 '07 #35

P: n/a
On Thu, 23 Aug 2007, David E. Ross wrote:
I was explaining that I use a <METAtag to specify character set rather
than using a server command to specify it.
But you should know that <meta http-equiv> is only a paper moon,
a poor ersatz for the real thing, monopoly money instead of
real bucks.
Aug 24 '07 #36

P: n/a
On Fri, 24 Aug 2007, Jukka K. Korpela wrote:
On the other hand, Andreas is completely right in suggesting that authors
should primarily try and find the way to set the real HTTP headers.
There is also another argument. The simpletons who know only
<meta http-equiv="Content-Type" content="text/html; charset=iso-8859-1">
and who have no idea about HTTP headers are inclined to write

<meta http-equiv="Content-Type" content="application/xhtml+xml">

to change their content-type.
Examples: http://it.bacal.de/ and http://www.bacal.de/

That won't work of course.
Other clueless authors have

c h a r s e t = u t f - 1 6

in their page source to set the encoding to UTF-16.
But to be able to read this, you must know *in advance*
that the encoding (charset) is UTF-16.
The most clueless of the clueless even have

charset=utf-16

written in US-ASCII in the page source of an otherwise UTF-16-
encoded page.
Aug 24 '07 #37

P: n/a
On 8/20/2007 12:08 AM, te*@shapin.org wrote:
I have an old link that was widely distributed. I would now like to
put a link on that old
page that will go to a new page without displaying anything.
This thread has become a forum where people (including me,
unfortunately) are quibbling about issues of design and style. What I'm
doing in my Web site generally meets the W3C HTML 4.01 specification
(except for some very old pages that I haven't updated) and the HTTP
RFCs. That is, I comply with standards and conventions even if I do not
satisfy other individuals' concepts of elegance. I'm not breaking any
browsers. Further, how I develop and maintain my pages without a real
local server is satisfactory to me even if others feel I need to have a
server.

Therefore, I'm bowing out of this thread.

--

David E. Ross
<http://www.rossde.com/>.

The only reason we have so many laws is that not enough people will do
the right thing. (© 1997)
Aug 24 '07 #38

P: n/a
Scripsit Helmut Richter:
The problem of the server admins is not only that they could be
somewhat insane (evil or paranoic) but that they have to inform their
authors about the ways of affecting the HTTP headers.
I agree that they _should_ do that, but in most cases, authors can find it
out themselves if they are interested. I was thinking about the damage that
admins actively create by disallowing .htaccess to affect Content-Type (even
though this might these days be the default in some server software, it's
bad, wrong, and clueless).

--
Jukka K. Korpela ("Yucca")
http://www.cs.tut.fi/~jkorpela/

Aug 26 '07 #39
