
Questions about XHTML <head> Content

This is a rather general subject; I apologize.

I am new to XHTML, CSS, et al and I am having trouble
understanding the DTD and xml namespace declarations.

For example:
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN"
"http://www.w3.org/TR/xhtml1/DT¬D/xhtml1-strict.dtd">

<html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en">

In particular, what do the URLs provide?

In the case of the DTD, the URL points to a file containing
lots of "<!ENTITY ...". What is this file?

The xmlns URL points to a directory. Is there something in there?

Secondly, what is the proper way to specify the charset
for XHTML Strict?

<meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />
Or
<?xml version="1.0" encoding=" UTF-8" ?>
Chad

Jul 24 '05 #1
On 19 May 2005, cc*****@yahoo.com wrote:
Secondly, what is the proper way to specify the charset
for XHTML Strict?

<meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />
No. As the designation "http-equiv" suggests, this is only an ersatz for the real HTTP header.
http://ppewww.ph.gla.ac.uk/~flavell/...t/ns-burp.html
Or
<?xml version="1.0" encoding=" UTF-8" ?>


This is the proper way to declare the encoding for *the document*.
However, you should always set the HTTP header accordingly
when you serve this document via HTTP:
http://www.w3.org/International/O-HTTP-charset.html
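
For example (a sketch, assuming Apache httpd; other servers have
equivalent settings), the single directive

AddDefaultCharset UTF-8

makes the server send

Content-Type: text/html; charset=UTF-8

with its text/html responses.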

--
Everybody expects the German Inquisition.

Jul 24 '05 #2


cc*****@yahoo.com wrote:

I am new to XHTML, CSS, et al and I am having trouble
understanding the DTD and xml namespace declarations.

For example:
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN"
"http://www.w3.org/TR/xhtml1/DT¬D/xhtml1-strict.dtd">

<html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en">

In particular, what do the URL's provide?

In the case of the DTD, the URL points to a file containing
lots of "<!ENTITY ...". What is this file?
An XML DTD (document type definition) that defines the grammar of XHTML
1.0 documents. You can then use a validator to check whether the
document adheres to the declared grammar.
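For example, most of the "<!ENTITY ..." lines near the top are parameter
entities, reusable macros used inside the DTD itself; further down come
the actual grammar rules, such as (quoting from memory, so treat this as
a sketch):

<!ELEMENT html (head, body)>

which is precisely the rule saying that an html element must contain a
head followed by a body, in that order.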
The xmlns URL points to a directory. Is there something in there?
A namespace URL is a name. URLs are used because they provide a
world-wide unique name, but usually the URL does not point to a resource.
Secondly, what is the proper way to specify the charset
for XHTML Strict?

<meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />
Or
<?xml version="1.0" encoding=" UTF-8" ?>


It depends on how you want your XHTML to be treated. For an XML parser
you need to declare the encoding in the XML declaration (although for
UTF-8 and UTF-16 the declaration is optional).
Many people, however, use XHTML only to serve it as text/html to HTML
browsers like IE; in that case the XML declaration is ignored, so it
makes sense to use the meta element (unless you do as Andreas already
suggested and make use of HTTP response headers).
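
A sketch of what the combination looks like (the XML declaration for XML
parsers, the meta element for HTML browsers; note that an XML declaration
before the DOCTYPE is known to throw IE 6 into quirks mode, which is one
more reason some authors omit it when serving text/html):

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN"
    "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en">
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />
<title>Example</title>
</head>
<body><p>Example</p></body>
</html>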
--

Martin Honnen
http://JavaScript.FAQTs.com/
Jul 24 '05 #3
In article
<Pi*************************************@s5b004.rrzn.uni-hannover.de>,
Andreas Prilop <nh******@rrzn-user.uni-hannover.de> wrote:
<?xml version="1.0" encoding=" UTF-8" ?>


This is the proper way to declare the encoding for *the document*.
However, you should always set the HTTP header accordingly
when you serve this document via HTTP:
http://www.w3.org/International/O-HTTP-charset.html


The charset parameter is not necessary with application/xml or
application/*+xml. Considering Ruby's Postulate, one might even argue
using the charset parameter is a bad idea in those cases.

--
Henri Sivonen
hs******@iki.fi
http://hsivonen.iki.fi/
Mozilla Web Author FAQ: http://mozilla.org/docs/web-developer/faq.html
Jul 24 '05 #4
Please correct me if I'm wrong,

Reading http://www.xml.com/pub/a/1999/01/namespaces.html,

The name of the namespace is the URL.
Why use a URL? Because they are unique?
Couldn't it be any string?

In contrast, xmlns:xdc="http://www.xml.com/books" defines a namespace
"http://www.xml.com/books" with the prefix xdc.

Chad

Jul 24 '05 #5
On Thu, 19 May 2005, Henri Sivonen wrote:
However, you should always set the HTTP header accordingly
when you serve this document via HTTP:
http://www.w3.org/International/O-HTTP-charset.html
The charset parameter is not necessary with application/xml or
application/*+xml.


So you exclude all browsers that understand only text/html
or you need to serve your documents with different types.
[I wrote "should" and you wrote "is not necessary", BTW.]
Considering Ruby's Postulate, one might even argue
using the charset parameter is a bad idea in those cases.


Please explain!

--
I used to believe in reincarnation in a former life.

Jul 24 '05 #6


cc*****@yahoo.com wrote:

Reading http://www.xml.com/pub/a/1999/01/namespaces.html,

The name of the namespace is the URL.
Why use a URL? Because they are unique?
I have already said that, yes, to have a globally unique name.
Couldn't it be any string?


See
<http://www.w3.org/TR/REC-xml-names/>
It says that a namespace name is a URI reference.
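
For example, taking the xdc declaration from your post: the prefix is
only a local shorthand for the namespace name, and names are compared as
plain strings, never dereferenced:

<book xmlns:xdc="http://www.xml.com/books">
  <xdc:title>In the namespace http://www.xml.com/books</xdc:title>
  <title>In no namespace at all</title>
</book>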
--

Martin Honnen
http://JavaScript.FAQTs.com/
Jul 24 '05 #7
On Thu, 19 May 2005 09:33:41 -0700, cc*****@yahoo.com wrote:
Please correct me if I'm wrong,

Reading http://www.xml.com/pub/a/1999/01/namespaces.html,

The name of the namespace is the URL.
Why use a URL? Because they are unique?
Couldn't it be any string?


Yes, it could be any string.
The reason it's usually a URI is so that:
1) It's unique. If you own the domain/URL, then there's no way somebody
else is going to accidentally create a protocol/format that's identified
the same way.
2) Some follow a convention of posting the DTD definition of that format
at the URL described in the namespace.
--
Jeremy Nickurak -= Email: at***@shawng.spam.rifetech.com =-
Remember, when you buy major label CD's, you're paying
companies to sue families and independent music. Learn
more now at downhillbattle.org.

Jul 24 '05 #8
On Thu, 19 May 2005, Henri Sivonen wrote:
The [HTTP] charset parameter is not necessary with application/xml or
application/*+xml.


However, the WWW doesn't seem fully ready for that yet, and meantime
we need to use text/* content types for at least some clients (not all
of which are the browser-like operating system component that so many
folks mistake for a WWW browser).

CERT CA-2000-02 still says it's security-relevant that servers need to
send out text/* content types with an explicit charset specification,
even if that weren't already known to be good practice anyway. And, since the
HTTP specification is the final appeal on this (says RFC2616), it had
better be correct.
Jul 24 '05 #9
cc*****@yahoo.com wrote:
Please correct me if I'm wrong,

Reading http://www.xml.com/pub/a/1999/01/namespaces.html,

The name of the namespace is the URL.
Why use a URL? Because they are unique?
Couldn't it be any string?

In contrast, xmlns:xdc="http://www.xml.com/books" defines a namespace
"http://www.xml.com/books" with the prefix xdc.


Yes, you're right.

Using http:// URLs for the purpose is particularly stupid, because
the purpose of HTTP is to enable us to access a resource at the URL
in question (dereference the URL). And as soon as you do that, all
the assumptions underlying namespaces fall down. This leads to
serious confusion: see for example
http://lists.w3.org/Archives/Public/...2Jul/0232.html

--
Nick Kew
Jul 24 '05 #10
In article
<Pi*************************************@s5b004.rrzn.uni-hannover.de>,
Andreas Prilop <nh******@rrzn-user.uni-hannover.de> wrote:
On Thu, 19 May 2005, Henri Sivonen wrote:
However, you should always set the HTTP header accordingly
when you serve this document via HTTP:
http://www.w3.org/International/O-HTTP-charset.html


The charset parameter is not necessary with application/xml or
application/*+xml.


So you exclude all browsers that understand only text/html
or you need to serve your documents with different types.


No. Currently I am using HTML 4.01 as text/html for my pages.

I pointed out that "always" only applies to text/* types. (And of those
text/xml is considered harmful.)
Considering Ruby's Postulate, one might even argue
using the charset parameter is a bad idea in those cases.


Please explain!


Ruby's Postulate states that "The accuracy of metadata is inversely
proportional to the square of the distance between the data and the
metadata."[1] That is, the encoding stated in the XML declaration is
more likely to be the encoding actually used that the encoding stated in
the charset parameter of the HTTP Content-Type header. Or more to the
point, the HTTP charset is more likely to be wrong. That's why it would be
better to be silent about the encoding on the HTTP level and use the XML
declaration with application/* types.

If you serve bags of bytes using Apache, the piece of software that
builds the bag of bytes has a better chance of saying right things about
it than Apache looking at the file name extension.

Would you trust HTTP over the ZIP file itself on the matter of the
compression method used in a ZIP file?

[1] http://intertwingly.net/slides/2004/devcon/69.html

--
Henri Sivonen
hs******@iki.fi
http://hsivonen.iki.fi/
Mozilla Web Author FAQ: http://mozilla.org/docs/web-developer/faq.html
Jul 24 '05 #11
On Fri, 20 May 2005, Henri Sivonen wrote:

[...]
the encoding stated in the XML declaration is
more likely to be the encoding actually used that
"than" ?
the encoding stated in
the charset parameter of the HTTP Content-Type header.
That doesn't follow. Consider, to take just one example, a platform
where documents are stored in DBCS EBCDIC, but served out to HTTP as
utf-8. Which encoding are you going to advertise? Only one is
correct.
Or more to the point, the HTTP charset is more likely to be wrong.
The HTTP charset is definitive. If it's wrong then there's only one
permissible move: correct it. Taking it out is no solution.
That's why it would be better to be silent about the encoding on the
HTTP level
That may be your personal view, but it's not the message conveyed by
the usual sources of advice (including W3C and IETF, and CERT), at
least in relation to text/* content types.
If you serve bags of bytes using Apache, the piece of software that
builds the bag of bytes has a better chance of saying right things
about it than Apache looking at the file name extension.


Don't ignore the issue of transcoding. The document itself has no
idea what character encoding it's going to be served out as. Only the
software that does the transcoding can be sure of that.

So your argument may be statistically probable (in the sense that
statistically most platforms serve out the same encoding as they
store locally), but it's theoretically unsound. And check that
CA-2000-02 again for security relevance.
http://www.cert.org/tech_tips/malici...igation.html#3

Jul 24 '05 #12
In article <Pi*******************************@ppepc56.ph.gla.ac.uk>,
"Alan J. Flavell" <fl*****@ph.gla.ac.uk> wrote:
On Fri, 20 May 2005, Henri Sivonen wrote:

[...]
the encoding stated in the XML declaration is
more likely to be the encoding actually used that
"than" ?


Yes.
the encoding stated in
the charset parameter of the HTTP Content-Type header.


That doesn't follow. Consider, to take just one example, a platform
where documents are stored in DBCS EBCDIC, but served out to HTTP as
utf-8. Which encoding are you going to advertise? Only one is
correct.


Serve out as UTF-8. Advertise in XML declaration as UTF-8. Store as
UTF-8 bytes. (If another kind of storage is desired, the burden of
getting what is shown to the outside world right is on the party who
wishes to use legacy encodings privately.)
Or more to the point, the HTTP charset is more likely to be wrong.


The HTTP charset is definitive. If it's wrong then there's only one
permissible move: correct it. Taking it out is no solution.


For application/xml and application/xhtml+xml taking it out and using
the XML declaration is very much a solution. Just like leaving the
character encoding of an application/msword resource as an internal matter is a
solution. As far as HTTP is concerned, these are bags of bytes and the
recipient can look inside them to determine how to decode them.
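
That look-inside procedure is spelled out in Appendix F of the XML 1.0
specification: check for a byte order mark, otherwise read the
ASCII-compatible start of the encoding declaration. A simplified Python
sketch of the idea (it omits the BOM-less UTF-16 and UCS-4 cases that the
appendix also covers):

import re

def sniff_xml_encoding(data: bytes) -> str:
    """Rough sketch of XML 1.0 Appendix F encoding autodetection."""
    # 1. A byte order mark settles the question outright.
    if data.startswith(b'\xef\xbb\xbf'):
        return 'UTF-8'
    if data.startswith(b'\xfe\xff'):
        return 'UTF-16BE'
    if data.startswith(b'\xff\xfe'):
        return 'UTF-16LE'
    # 2. No BOM: in the encodings this sketch handles, the declaration
    #    is ASCII-compatible, so its encoding attribute is readable.
    m = re.match(br'<\?xml[^>]*?encoding=["\']([A-Za-z0-9._-]+)["\']', data)
    if m:
        return m.group(1).decode('ascii')
    # 3. No BOM and no declaration: the document must be UTF-8.
    return 'UTF-8'

print(sniff_xml_encoding(b'<?xml version="1.0" encoding="ISO-8859-1"?><a/>'))
# prints: ISO-8859-1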
That's why it would be better to be silent about the encoding on the
HTTP level


That may be your personal view, but it's not the message conveyed by
the usual sources of advice (including W3C and IETF, and CERT), at
least in relation to text/* content types.


I am specifically talking about application/xml and application/*+xml. I
am not talking about text/* (beyond saying that text/xml is considered
harmful).

The W3C TAG thinks (as do I) that the "STRONGLY RECOMMENDED" wrt.
charset on application/* types in RFC 3023 is misplaced.
http://www.w3.org/2001/tag/2002/0129-mime#char-encoding
If you serve bags of bytes using Apache, the piece of software that
builds the bag of bytes has a better chance of saying right things
about it than Apache looking at the file name extension.


Don't ignore the issue of transcoding. The document itself has no
idea what character encoding it's going to be served out as. Only the
software that does the transcoding can be sure of that.


Transcoding an application/xml or application/*+xml resource
representation that is encoded as either UTF-8 or UTF-16 is useless and
harmful. A transcoder has no business touching those bytes. Just like it
has no business tampering with the internals of an application/msword
resource representation.

Serving out an application/xml or application/*+xml resource
representation that is not encoded as either UTF-8 or UTF-16 is also
harmful.
So your argument may be statistically probable (in the sense that
statistically most platforms serve out the same encoding as they
store locally), but it's theoretically unsound.


So are you suggesting that ZIP files knowing their own compression
method(s) is theoretically unsound, too?

--
Henri Sivonen
hs******@iki.fi
http://hsivonen.iki.fi/
Mozilla Web Author FAQ: http://mozilla.org/docs/web-developer/faq.html
Jul 24 '05 #13
On Fri, 20 May 2005, Henri Sivonen wrote:
That doesn't follow. Consider, to take just one example, a platform
where documents are stored in DBCS EBCDIC, but served out to HTTP as
utf-8. Which encoding are you going to advertise? Only one is
correct.
Serve out as UTF-8.


As I said, yes.
Advertise in XML declaration as UTF-8.
OK. So, local access to the data will be wrong. Maybe that's not a
problem, but at least one needs to be aware of it.
Store as UTF-8 bytes.
In the situations that I have in mind, that's not open to you to
regulate. Content will be stored as native text files, that's a
"given" in these scenarios.
(If another kind of storage is desired, the burden of getting what
is shown to the outside world right is on the party who wishes to
use legacy encodings privately.)


Which is what I've been saying all along. But at least you need to
have a procedure which isn't incompatible with the theory.

In your case that's going to mean that a transcoder has to parse and
modify the document itself, in order to adjust its own idea of what
its character encoding is. It's do-able, of course, but it's
theoretically unsatisfactory IMNSHO.
Or more to the point, the HTTP charset is more likely to be wrong.


The HTTP charset is definitive. If it's wrong then there's only one
permissible move: correct it. Taking it out is no solution.


For application/xml and application/xhtml+xml taking it out and using
the XML declaration is very much a solution.


In the context of this group, I'm still talking about text/* content
types. Application/* is a different issue, and I'm not arguing with
you about that. Any arguments you may have will thus be tangential to
the issues that I have in mind.

Jul 24 '05 #14
In article <Pi*******************************@ppepc56.ph.gla.ac.uk>,
"Alan J. Flavell" <fl*****@ph.gla.ac.uk> wrote:
Advertise in XML declaration as UTF-8.
OK. So, local access to the data will be wrong.


No, it won't if the storage is UTF-8.
Store as UTF-8 bytes.


In the situations that I have in mind, that's not open to you to
regulate. Content will be stored as native text files, that's a
"given" in these scenarios.


Well, the best practice on Mac/Windows is to store UTF-8 and get rid of
the concept of native text files if native means MacRoman/Windows-1252.
Surely IBM mainframes can store bytes. How would you otherwise store PNG
images? Why couldn't you run an UTF-8 editor and IO libraries on IBM
mainframes if it is possible on other kinds of systems?
(If another kind of storage is desired, the burden of getting what
is shown to the outside world right is on the party who wishes to
use legacy encodings privately.)


Which is what I've been saying all along. But at least you need to
have a procedure which isn't incompatible with the theory.


It is not if you think of an XML document as a sequence of bytes and not
as a sequence of characters.
In your case that's going to mean that a transcoder has to parse and
modify the document itself, in order to adjust its own idea of what
its character encoding is. It's do-able, of course, but it's
theoretically unsatisfactory IMNSHO.


Just like Lab TIFF to RGB TIFF conversion requires knowledge about the
format instead of only about a theoretical ideal of a Cartesian grid of
color samples.
> Or more to the point, the HTTP charset is more likely to be wrong.

The HTTP charset is definitive. If it's wrong then there's only one
permissible move: correct it. Taking it out is no solution.


For application/xml and application/xhtml+xml taking it out and using
the XML declaration is very much a solution.


In the context of this group, I'm still talking about text/* content
types. Application/* is a different issue, and I'm not arguing with
you about that. Any arguments you may have will thus be tangential to
the issues that I have in mind.


In my first post to this thread I explicitly said my comment was about
application/xml and application/*+xml. They are, in theory at least :-),
relevant to XHTML.

--
Henri Sivonen
hs******@iki.fi
http://hsivonen.iki.fi/
Mozilla Web Author FAQ: http://mozilla.org/docs/web-developer/faq.html
Jul 24 '05 #15
On Fri, 20 May 2005, Henri Sivonen wrote:
"Alan J. Flavell" <fl*****@ph.gla.ac.uk> wrote:
Advertise in XML declaration as UTF-8.
OK. So, local access to the data will be wrong.


No, it won't if the storage is UTF-8.


The whole point of my illustrative example is that the local storage
isn't utf-8.
In the situations that I have in mind, that's not open to you to
regulate. Content will be stored as native text files, that's a
"given" in these scenarios.


Well, the best practice on Mac/Windows is to store UTF-8 and get rid of
the concept of native text files if native means MacRoman/Windows-1252.


I already told you what "native text files" might be in this scenario
(DBCS EBCDIC), but your examples would also suffice. Windows uses
utf-16, for example.

I say again, in this scenario it's not open to you to choose - other
criteria have already decided that text files are to be stored in the
platform's native encoding, and transcoded into an appropriate
external encoding by the HTTP server.
Surely IBM mainframes can store bytes.
"can", sure.
How would you otherwise store PNG images? Why couldn't you run an
UTF-8 editor and IO libraries on IBM mainframes if it is possible on
other kinds of systems?
It's not about possibility, but about decisions that are outside of
your reach in this scenario. If you don't want to tackle that
possibility then just say so, and we can drop the topic.
Which is what I've been saying all along. But at least you need
to have a procedure which isn't incompatible with the theory.


It is not if you think of an XML document as a sequence of bytes and
not as a sequence of characters.


Thus turning a straightforward text document into a bag of bytes that
makes no sense in itself but (from the theoretical point of view)
needs XML-specific software to make sense of it.

So you'd need a whole new set of XML-specific utilities to replace the
generic text utilities provided by the system.
In your case that's going to mean that a transcoder has to parse
and modify the document itself, in order to adjust its own idea of
what its character encoding is. It's do-able, of course, but it's
theoretically unsatisfactory IMNSHO.


Just like Lab TIFF to RGB TIFF conversion requires knowledge about
the format


No, it's not "just like". text/* MIME types can normally be
transcoded without needing to know anything about their internals: the
fact that they are text is all that one needs to know. Whereas no-one
expects to be able to convert one image format to another without
knowing which image format it is.
In the context of this group, I'm still talking about text/*
content types. Application/* is a different issue, and I'm not
arguing with you about that. Any arguments you may have will thus
be tangential to the issues that I have in mind.


In my first post to this thread I explicitly said my comment was
about application/xml and application/*+xml.


To which I responded:

However, the WWW doesn't seem fully ready for that yet, and meantime
we need to use text/* content types for at least some clients

which you continue to stubbornly disregard, preferring, it seems, to
argue with something that neither I nor any of the other contributors
have actually posted.
They are, in theory at least :-), relevant to XHTML.


You have your own opinions, as has the TAG - and these do indeed diverge
from the published recommendations (RFC 3023).

http://www.w3.org/2001/tag/2002/0129-mime#char-encoding

But this is outside of the scope of what I wanted to discuss. So if
you aren't interested in the text/* scenario, let's drop it, rather
than wasting each others' time with tangential arguments.
Jul 24 '05 #16
In article <Pi*****************************@ppepc56.ph.gla.ac.uk>,
"Alan J. Flavell" <fl*****@ph.gla.ac.uk> wrote:
On Fri, 20 May 2005, Henri Sivonen wrote:
How would you otherwise store PNG images? Why couldn't you run an
UTF-8 editor and IO libraries on IBM mainframes if it is possible on
other kinds of systems?


It's not about possibility, but about decisions that are outside of
your reach in this scenario. If you don't want to tackle that
possibility then just say so, and we can drop the topic.


Well, if you've got an XML document that for whatever reason needs to be
transcoded on the server, the proper way to tackle transcoding is to
connect an XML serializer to an XML parser. (That's what I have actually
done when the need has been there.)
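
A minimal sketch of that pipeline in Python (assumptions: the input
carries its own encoding declaration, and losing the DTD, comments and
processing instructions is acceptable; a production transcoder would use
a serializer that preserves them):

import io
import xml.etree.ElementTree as ET

def transcode_xml(src: bytes, target: str = 'utf-8') -> bytes:
    # The parser reads the encoding declared inside the document itself.
    root = ET.fromstring(src)
    out = io.BytesIO()
    # Re-serialize; xml_declaration=True writes a declaration naming the
    # new encoding, so the bytes and the declaration cannot disagree.
    ET.ElementTree(root).write(out, encoding=target, xml_declaration=True)
    return out.getvalue()

latin1 = '<?xml version="1.0" encoding="ISO-8859-1"?><p>caf\xe9</p>'.encode('latin-1')
print(transcode_xml(latin1).decode('utf-8'))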
So you'd need a whole new set of XML-specific utilities to replace the
generic text utilities provided by the system.
Yes. Processing XML as mere text risks breaking the document.
Just like Lab TIFF to RGB TIFF conversion requires knowledge about
the format


No, it's not "just like". text/* MIME types can normally be
transcoded without needing to know anything about their internals: the
fact that they are text is all that one needs to know.


That's true of text/plain. However, text/rtf, text/html, text/xml and
text/css all have the means for storing the character encoding
internally (even if HTTP wants to override that). The reality does not
agree with the theory.
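
To illustrate the in-band mechanisms just mentioned, a sketch of each:

@charset "UTF-8";   /* must be the very first bytes of a CSS file */
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
<?xml version="1.0" encoding="UTF-8"?>

(RTF's \mac and \pc control words, discussed further below, play a
similar role.)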
But this is outside of the scope of what I wanted to discuss. So if
you aren't interested in the text/* scenario, let's drop it, rather
than wasting each others' time with tangential arguments.


Ok.

--
Henri Sivonen
hs******@iki.fi
http://hsivonen.iki.fi/
Mozilla Web Author FAQ: http://mozilla.org/docs/web-developer/faq.html
Jul 24 '05 #17
On Sat, 21 May 2005, Henri Sivonen wrote:
"Alan J. Flavell" <fl*****@ph.gla.ac.uk> wrote:
No, it's not "just like". text/* MIME types can normally be
transcoded without needing to know anything about their internals: the
fact that they are text is all that one needs to know.
That's true of text/plain.


And richtext, and comma-separated, and tab-delimited, and lots of
other text/* formats...
However, text/rtf,
Until MS bumbled into using 8-bit encodings (rtf version 1.6), all
text/rtf was encoded in us-ascii, as it tells you in their RTF
specification. So there was no problem with transcoding text/rtf.

That's no longer true with 8-bit data, of course... Their
specification mumbles:

| Document text should be emitted as ANSI characters.

without clearly defining what this "ANSI" character encoding might be,
or how that is supposed to interact with their \mac, \pc or \pca
"control words".

But RTF is really a mess, considering that the content of a
single document can be in several different "Character Sets", none of
which (in rtf 1.6) are necessarily the 8-bit encoding that's used for
representing the RTF.

The specification, as you'd expect, tangles itself in knots confusing
"Character Set" with character encoding. So, yes, text/rtf is now a
specific problem in those terms. And the mess that our users get into
with it (particularly those who attempt collaborative editing of the
same document, some on Macs and some on Peecees) only goes to confirm
my misgivings.
text/html, text/xml and text/css all have the means for storing the
character encoding internally
I know - you'll find me on record from the early days as describing
this procedure as theoretically inappropriate. The character encoding
of a document instance is an external property of the *instance*, not
an inherent property of the *document* itself, and trying to tell the
document itself what encoding it's in, although it has developed into
a widespread practice with WWW documents, is still fundamentally
wrong, the way that I see it.
The reality does not agree with the theory.


We agree on that, although we evidently disagree on what ought to be
done about it. I've lost, due to the hordes of folk whose situation
is simple enough that they don't have to deal with the consequences,
and would rather have a simple fix than a theoretically sound
solution. C'est la vie.
you aren't interested in the text/* scenario, let's drop it, rather
than wasting each others' time with tangential arguments.


Ok.


(But you went on and discussed the text/* scenario anyway. Well, I've
said my piece, so let's indeed drop it now.)
Jul 24 '05 #18
