CSS versus HTML tables

I wanted to spiff up my overly spartan homepage, and started using some CSS
templates I found on a couple of weblogs. It looks fine in my browser (IE
6.0), but it doesn't print right. I tested the blogs, and one definitely
didn't print right.

Surveying the web, my impression is that CSS is very unreliable, because
even updated browsers fail to implement the standards correctly.

So should one just avoid CSS? Or is it OK if used carefully (e.g. use
HTML tables for "global" layout, and CSS for things like font
specifications)?
Jul 20 '05
Eric B. Bednarz <be*****@fahr-zur-hoelle.org> wrote:
"Alan J. Flavell" <fl*****@ph.gla.ac.uk> writes:
On Tue, 17 Feb 2004, DU wrote:

The doctype decl. is important for other purposes: to trigger well above
80% of all browsers (MSIE 6, all Mozilla-based browsers, Opera 7.x) in
use out there into standards compliant rendering mode


A disgusting hack that gets no better with the lapse of time,


It contradicts standards, punishes people who know *exactly* what they
are doing and in turn rewards the clueless. We call that the realm of
common sense over here (you also don't need to have nails to use a
hammer, because hammers are important for other purposes, like smashing
heads; since this obviously works very well, it must be considered a
good thing).
...
Doubling the effort to no avail seems to be the primary
motivation of web (so-called) standards evangelism.


That's unfair, the evangelists serve a "higher goal" than us mere
mortals.

"Given the higher goal of infiltrating the market with a standards
compliant browser, and the requirement for mass market acceptance to
reach that goal, and the expectation in the market that browsers will
keep rendering existing pages as they always have done, I believe that
it is sensible to use the DOCTYPE (or lack thereof) to decide whether
or
not to render pages in a strictly compliant way or a backwards
compatible way, yes."

Ian Hickson, Mozilla apologist, CSS 2.1 editor, etc. wrote that on
26th July 2000. "The end justifies the means," is what he means. And
he was quite clear about the end - infiltrating the market with
bloatzilla.
Jul 20 '05 #51
*Karl Smith*:

There is no equivalent of the DOCTYPE declaration for a CSS file -
no "@level CSS1" or "@level CSS2".


There is (or likely will be) the property 'box-sizing' to switch between
the CSS1/CSS2 and IE4/IE5 box models. Of course that doesn't affect the other
parser changes the Doctype Declaration Switch imposes, but in most cases the
box model is the most important (or the only important) one.
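
To make that concrete, here is a minimal sketch of how the property is meant
to be used (hedged: at the time of this thread it was still draft CSS3, and
Mozilla-based browsers only exposed it under a vendor prefix):

/* ask explicitly for the CSS1/CSS2 content-box model */
div.spec   { box-sizing: content-box; -moz-box-sizing: content-box; }
/* or opt into the IE4/IE5-style model, where 'width' includes padding and border */
div.legacy { box-sizing: border-box;  -moz-box-sizing: border-box; }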

--
"We know it's summer when the rain's a wee bit warmer."
cab driver in Glasgow
Jul 20 '05 #52
Barry Pearson wrote:
Leonard Blaisdell wrote:


[snip validation comments]
<http://www.benmeadowcroft.com/me/archive/2003/january.shtml#link25th>
puts it all together along with a link to the original thesis.


Thanks. 125 pages to read! Ouch.


The essentials relating to validation of the www are contained within only a
handful of pages. Well worth reading imo. Treating pages without a
DTD as the latest transitional HTML type, about 2.5% validate, rather
than only 0.7%.

--
Michael
m r o z a t u k g a t e w a y d o t n e t
Jul 20 '05 #53
Michael Rozdoba wrote:
Barry Pearson wrote:
Leonard Blaisdell wrote:

[snip validation comments]
<http://www.benmeadowcroft.com/me/archive/2003/january.shtml#link25th>
puts it all together along with a link to the original thesis.


Thanks. 125 pages to read! Ouch.


The essentials relating to validation of the www are contained within
only a handful of pages. Well worth reading imo. Treating pages without a
DTD as the latest transitional HTML type, about 2.5% validate,
rather than only 0.7%.


Gosh, in fact as much as 2.58%! Near perfection. (Thanks).

This looks like a valuable contribution. I'm trying to form a view of what
things will be like in (say) 10 years' time, or even 20, with new browsers &
still lots of invalid web pages. I can't spot the motivation for authors in
general (rather than the sort who post here) to change. Will things only change
when authoring tools improve?

(I've already said elsewhere that I write 4.01 Strict now for workflow
reasons, not to make my pages more likely to work "out there". That thesis is
almost a catalogue of "what you can get away with"!)

--
Barry Pearson
http://www.Barry.Pearson.name/photography/
http://www.BirdsAndAnimals.info/
http://www.ChildSupportAnalysis.co.uk/
Jul 20 '05 #54
In article <40***********************@lovejoy.zen.co.uk>,
Michael Rozdoba <mr**@nowhere.invalid> writes:
handful of pages. Well worth reading imo. Treating pages without a
DTD as the latest transitional HTML type, about 2.5% validate, rather
than only 0.7%.


Perhaps also worth noting that awareness of (X)HTML and validation
seems to be gradually improving, so I'd expect the figure today to
be higher rather than lower than in that survey.

Of course, you could get a good deal of variability by simply
varying the sampling method. Spidering would not be good for this,
because it'll give "clusters" of similar pages. Likewise sampling
big-name commercial sites is not representative, because they have
such a big legacy of "millennium" crap.

--
Nick Kew
Jul 20 '05 #55
Nick Kew wrote:
In article <40***********************@lovejoy.zen.co.uk>,
[snip]
Perhaps also worth noting that awareness of (X)HTML and validation
seems to be gradually improving, so I'd expect the figure today to
be higher rather than lower than in that survey.

Of course, you could get a good deal of variability by simply
varying the sampling method. Spidering would not be good for this,
because it'll give "clusters" of similar pages. Likewise sampling
big-name commercial sites is not representative, because they have
such a big legacy of "millennium" crap.


When considering how to collect this info, the thesis author listed using a
customised browser to do the validation as an option, but dismissed it due to
time constraints, the need to reach a wide browser user base, & the fact that
users would not be happy with this going on behind their backs.

However, if it were an opt-in feature within a browser such as Firefox,
maybe via a plugin, if that's possible, with users encouraged to enable the
option for the long-term greater good, it might be workable. Processing
could be kept at a user-configurable level by only checking every 1 in n
pages visited, with weekly or monthly reporting back to a central database.

It would have the great advantage of providing continuous updates to the
collected information & could support many other assessments, such as the
proliferation & uptake of features like table vs CSS layout,
which would be of use to browser authors.

Does this seem at all feasible? I wonder if any of the developers would
consider the idea.

--
Michael
m r o z a t u k g a t e w a y d o t n e t
Jul 20 '05 #56
Barry Pearson wrote:

[validation]
Gosh, in fact as much as 2.58%! Near perfection. (Thanks).
<g> At least it's over 300% of 0.7% & you probably wouldn't object to a
200% pay rise.

[future]
Will things only change when authoring tools improve?
A very interesting question. I imagine it's impossible to even guess at
an answer without much data on how web content has altered in the past
in response to standards.
(I've already said elsewhere that I write 4.01 Strict now for
workflow reasons, not to make my pages more likely to work "out
there". That thesis is almost a catalogue of "what you can get away
with"!)


Indeed. Given the motivation was to present guidelines on how browser
authors can adopt a structured approach to parsing broken html, it does
leave one feeling rather down.

--
Michael
m r o z a t u k g a t e w a y d o t n e t
Jul 20 '05 #57
In article <40***********************@lovejoy.zen.co.uk>,
Michael Rozdoba <mr**@nowhere.invalid> writes:
Nick Kew wrote:
In article <40***********************@lovejoy.zen.co.uk>,
[snip]
Perhaps also worth noting that awareness of (X)HTML and validation
seems to be gradually improving, so I'd expect the figure today to
be higher rather then lower than in that survey.

Of course, you could get a good deal of variability by simply
varying the sampling method. Spidering would not be good for this,
because it'll give "clusters" of similar pages. Likewise sampling
big-name commercial sites is not representative, because they have
such a big legacy of "millenium" crap.


When considering how to collect this info, the thesis author listed as
an option using a customised browser to do the validation, but dismissed
this due to time constraints, needing to reach a wide browser user base
& as users would not be happy with this going on behind their backs.


Interesting you should mention that: there are a few people working right
now on adding validation (fully local, not using a web-based svc) to MSIE.
It appears as an additional bar at the top of the browser, and could
presumably be extended to collect stats.

When your post reaches google I'll point Bjoern at it and see if he thinks
this looks like an extension he might be interested to pursue.
Does this seem at all feasible? I wonder if any of the developers would
consider the idea.


I don't mind helping out with a validation component the opensource browsers
could incorporate, but I don't have the time or the hardware to start hacking
browser code itself.

--
Nick Kew
Jul 20 '05 #58
Michael Rozdoba wrote:
Barry Pearson wrote:

[snip]
(I've already said elsewhere that I write 4.01 Strict now for
workflow reasons, not to make my pages more likely to work "out
there". That thesis is almost a catalogue of "what you can get away
with"!)


Indeed. Given the motivation was to present guidelines on how browser
authors can adopt a structured approach to parsing broken html, it
does leave one feeling rather down.


At the risk of triggering apoplexy all round, ponder what would happen if
browsers adopted the proposals in a fairly consistent way. Suppose that (say)
80% of web pages can be handled by that set of proposals in a consistent way.

In effect, this would define a new de facto standard, "HTML 4.01 Repairable",
which 80% of web sites would comply with.

Gosh - widespread standards-compliance! Just not a de jure standard - unless
it got recommended formally.

--
Barry Pearson
http://www.Barry.Pearson.name/photography/
http://www.BirdsAndAnimals.info/
http://www.ChildSupportAnalysis.co.uk/
Jul 20 '05 #59
Michael Rozdoba wrote:
[snip]
It would have the great advantage of providing continuous updates to
the collected information & could make many other assessments, such as
proliferation & uptake of certain features such as table vs css
layout, which would be of use to browser authors.

[snip]

A technique for part of that (table layout) that I used for months, and still
do sometimes, was to browse (IE) with a local CSS containing:
table { border: 1px dotted blue; }

Obviously that isn't nearly as good as having proper statistics, but variants
on this can be useful for personal reasons. (And my crude estimate is about
99% of the pages I saw used them!)
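
A slightly extended variant of that user stylesheet, offered only as a sketch
(the nested-table rule is just a heuristic for spotting likely layout tables,
not proof of misuse):

table { border: 1px dotted blue; }       /* outline every table */
table table { border: 1px dotted red; }  /* nested tables often indicate layout use */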

--
Barry Pearson
http://www.Barry.Pearson.name/photography/
http://www.BirdsAndAnimals.info/
http://www.ChildSupportAnalysis.co.uk/
Jul 20 '05 #60
Tim
Michael Rozdoba wrote:
Indeed. Given the motivation was to present guidelines on how browser
authors can adopt a structured approach to parsing broken html, it
does leave one feeling rather down.


"Barry Pearson" <ne**@childsupportanalysis.co.uk> wrote:
At the risk of triggering apoplexy all round, ponder what would happen if
browsers adopted the proposals in a fairly consistent way. Suppose that (say)
80% of web pages can be handled by that set of proposals in a consistent way.

In effect, this would define a new de facto standard, "HTML 4.01 Repairable",
which 80% of web sites would comply with.

Gosh - widespread standards-compliance! Just not a de jure standard - unless
it got recommended formally.


You don't work for Microsoft, by any chance? (Let's redesign the specs
to suit how our software behaves, rather than redesign our broken
software to adhere to the specs.)

A fantastic improvement to web browsers would be a refusal to display
broken HTML, instead displaying a full-screen, bright red, FAILED
response. Then a few idiot web authors might *instantly* find out that
they've got broken HTML, and fix it. They'd have to if they knew that
nobody was going to get to see their broken HTML.

--
My "from" address is totally fake. The reply-to address is real, but
may be only temporary. Reply to usenet postings in the same place as
you read the message you're replying to.

This message was sent without a virus, please delete some files yourself.
Jul 20 '05 #61
Tim wrote:
[snip]
"Barry Pearson" <ne**@childsupportanalysis.co.uk> wrote:
At the risk of triggering apoplexy all round, ponder what would
happen if browsers adopted the proposals in a fairly consistent way.
Suppose that (say) 80% of web pages can be handled by that set of
proposals in a consistent way.

In effect, this would define a new de facto standard, "HTML 4.01
Repairable", which 80% of web sites would comply with.

Gosh - widespread standards-compliance! Just not a de jure standard
- unless it got recommended formally.
You don't work for Microsoft, by any chance? (Let's redesign the
specs to suit how our software behaves, rather than redesign our
broken software to adhere to the specs.)


I thought there was a risk of apoplexy! Stay calm. (And it was an IBM
principle before that: "it isn't a bug, it's a feature").

No - I used to work for a different company, in the architecture group, and we
were thought of as fanatical ruthless assassins, who would wipe out a design
just with the phrase "it isn't architectural". But that attitude had its
place - we were developing a secure mainframe system that needed the potential
to evolve for decades. We were doing so in an environment where we could (at
least in theory) control such things.

But the web isn't like that. There is no plausible way to "redesign our broken
pages to adhere to the specs". They are out there, and they won't go away.
More are added every day - perhaps between 100,000 & 1 million (or perhaps
lots more?) invalid pages every day. And there is no reason at the moment for
that to stop.

There is actually nothing wrong with the principle of changing the spec. to
match practice. For example, parts of CSS2.1 appear to be exactly like that -
making parts of CSS match what browsers do.
A fantastic improvement to web browsers would be a refusal to display
broken HTML, instead displaying a full-screen, bright red, FAILED
response. Then a few idiot web authors might *instantly* find out
that they've got broken HTML, and fix it. They'd have to if they
knew that nobody was going to get to see their broken HTML.


Any browser that tried that would not be used, except by a few purists! Why
would people who want to browse the web ever consider such a browser? They
want to access whatever they are after without fuss. I now write valid 4.01
Strict, and would like a browser that would easily validate my pages while
they were still on my PC. But I *use* the web with a tolerant browser that
most authors have checked their pages against - IE 6. All I really want is to
see the pages as the author intended, which probably means as IE 6 renders
them.

--
Barry Pearson
http://www.Barry.Pearson.name/photography/
http://www.BirdsAndAnimals.info/
http://www.ChildSupportAnalysis.co.uk/
Jul 20 '05 #62
Barry Pearson wrote:

[snip]
A technique for part of that (table layout) that I used for months, and still
do sometimes, was to browse (IE) with a local CSS containing:
table { border: 1px dotted blue; }

Obviously that isn't nearly as good as having proper statistics, but variants
on this can be useful for personal reasons. (And my crude estimate is about
99% of the pages I saw used them!)


Quite believable. I'm sure they were all displaying tabular data too...
And yes, I have read the previous long threads on such discussions which
you're so fond of ;)

--
Michael
m r o z a t u k g a t e w a y d o t n e t
Jul 20 '05 #63
Nick Kew wrote:
In article <40***********************@lovejoy.zen.co.uk>,
Interesting you should mention that: there are a few people working right
now on adding validation (fully local, not using a web-based svc) to MSIE.
It appears as an additional bar at the top of the browser, and could
presumably be extended to collect stats.

When your post reaches google I'll point Bjoern at it and see if he thinks
this looks like an extension he might be interested to pursue.


No harm in raising the idea.
Does this seem at all feasible? I wonder if any of the developers would
consider the idea.


I don't mind helping out with a validation component the opensource browsers
could incorporate, but I don't have the time or the hardware to start hacking
browser code itself.


I don't know enough about sampling, but to get relevant stats one would
need many users to ensure a representative cross-section of usage, if
the stats are to be indicative of general web content. Other issues
would also arise: the scheme obviously needs to be at the
choice of the user, but one really needs to persuade potential users to
leave the option on long term, and privacy concerns would need to be
convincingly addressed.

It would be best if this could be integrated into a mainstream browser
release for all of these reasons, however your suggestion would be an
interesting test case.

Any ideas on how to persuade users to take up such a feature? Maybe
teams with rankings according to how many pages are processed, à la
distributed computing projects?

As another approach to the design problem, though this is getting away
from the above, one could adopt a web proxy system, which would be
compatible with any browser.

--
Michael
m r o z a t u k g a t e w a y d o t n e t
Jul 20 '05 #64
In article <40***********************@lovejoy.zen.co.uk>,
Michael Rozdoba <mr**@nowhere.invalid> writes:
When your post reaches google I'll point Bjoern at it and see if he thinks
this looks like an extension he might be interested to pursue.

Now done.
No harm in raising the idea.
http://sourceforge.net/projects/ieqabar
I don't know enough about sampling, but to get relevant stats, one would
need many users to ensure a representative cross section of usage, if
the stats are to be indicative of general web content. Other issues
Indeed - I'm now planning to raise this at the next scheduled meeting
of W3C/qa-dev.
As another approach to the design problem, though this is getting away
from the above, one could adopt a web proxy system, which would give a
system which was compatible with any browser.


That's close to another idea that's been discussed (in the context
of accessibility).

Are you expressing a "would-be-nice" wish, or might you be interested
to participate actively in setting up such a study? Feel free to email
me privately if you'd care to discuss getting involved in a W3C context.
(Disclaimer: I don't work for W3C, let alone speak for them or invite
third parties to participate in working groups. I just think the
ideas you're putting forward have potential to make you welcome).

--
Nick Kew
Jul 20 '05 #65
Barry Pearson wrote:
Tim wrote:
Barry Pearson <ne**@childsupportanalysis.co.uk> wrote:
At the risk of triggering apoplexy all round, ponder what would
happen if browsers adopted the proposals in a fairly
consistent way.

In effect, this would define a new de facto standard, "HTML
4.01 Repairable", which 80% of web sites would comply with.


You don't work for Microsoft, by any chance? (Let's redesign the
specs to suit how our software behaves, rather than redesign our
broken software to adhere to the specs.)


I thought there was a risk of apoplexy! Stay calm.


Except that Tim's response was not apoplectic, and was perfectly calm.
You aren't trolling for a fight, are you?

--
Brian (follow directions in my address to email me)
http://www.tsmchughs.com/

Jul 20 '05 #66
Nick Kew wrote:

[snip]
Are you expressing a "would-be-nice" wish, or might you be interested
to participate actively in setting up such a study? Feel free to email
me privately if you'd care to discuss getting involved in a W3C context.
(Disclaimer: I don't work for W3C, let alone speak for them or invite
third parties to participate in working groups. I just think the
ideas you're putting forward have potential to make you welcome).


Taken to email.

--
Michael
m r o z a t u k g a t e w a y d o t n e t
Jul 20 '05 #67
(Nick Kew in comp.infosystems.www.authoring.stylesheets)
In article <40***********************@lovejoy.zen.co.uk>,
Michael Rozdoba <mr**@nowhere.invalid> writes:
When considering how to collect this info, the thesis author listed as
an option using a customised browser to do the validation, but dismissed
this due to time constraints, needing to reach a wide browser user base
& as users would not be happy with this going on behind their backs.


Interesting you should mention that: there are a few people working right
now on adding validation (fully local, not using a web-based svc) to MSIE.
It appears as an additional bar at the top of the browser, and could
presumably be extended to collect stats.


Or they could make it a plugin to Frontpage.

Tilman
--
The statistical death means nothing to you. The stochastic death is your own.
Jul 20 '05 #68
DU
Matthias Gutfeldt wrote:
DU wrote:

[snipped]
For validation purposes there's no need to include the Doctype. The W3C
validator has a Doctype override feature.

I'm sorry. I do not understand... You say the W3C validator has a
Doctype override feature? But this is not an automatic feature like the
WDG validator. You manually have to select a doctype decl. in case there
is no doctype declaration.

The doctype decl. is important for other purposes: to trigger well
above 80% of all browsers (MSIE 6, all Mozilla-based browsers, Opera
7.x) in use out there into standards compliant rendering mode
(preferable for tons of reasons) or not. No doctype decl.
automatically means your webpage will always be using a backward
compatible rendering mode where errors are deliberately "corrected",
tolerated and dealt with.

The Doctype switch is an abomination.
Matthias


Well, abomination is a strong word. If Microsoft drops this doctype
trigger entirely in MSIE 7 and only renders documents according to W3C
web standards, then the doctype trigger will have served its purpose
with MSIE 6: to give people time to upgrade their markup code.
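
For anyone who hasn't met the switch, a minimal illustration (the exact list
of doctypes that trigger each mode varies between browsers, so treat this as
indicative only):

<!-- A full doctype with system identifier puts MSIE 6, Mozilla-based
     browsers and Opera 7 into their standards-compliant rendering mode: -->
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN"
    "http://www.w3.org/TR/html4/strict.dtd">

<!-- Omitting the doctype altogether drops the same browsers into the
     backwards-compatible "quirks" rendering mode described above. -->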

DU
Jul 20 '05 #69
Tim
"Barry Pearson" <ne**@childsupportanalysis.co.uk> wrote:
At the risk of triggering apoplexy all round, ponder what would
happen if browsers adopted the proposals in a fairly consistent way.
Suppose that (say) 80% of web pages can be handled by that set of
proposals in a consistent way.

In effect, this would define a new de facto standard, "HTML 4.01
Repairable", which 80% of web sites would comply with.


Tim wrote:
You don't work for Microsoft, by any chance? (Let's redesign the
specs to suit how our software behaves, rather than redesign our
broken software to adhere to the specs.)

"Barry Pearson" <ne**@childsupportanalysis.co.uk> wrote:
I thought there was a risk of apoplexy! Stay calm. (And it was an IBM
principle before that: "it isn't a bug, it's a feature").

No - I used to work for a different company, ...[snip]...

But the web isn't like that. There is no plausible way to "redesign our broken
pages to adhere to the specs". They are out there, and they won't go away.
More are added every day - perhaps between 100,000 & 1 million (or perhaps
lots more?) invalid pages every day. And there is no reason at the moment for
that to stop.
Only because software is too tolerant. We won't get new correct pages
if the browsers don't insist on it. We've got a new standard emerging,
XHTML, which is supposed to be correct or fail absolutely. If that
actually happened, all would be well. But no, we're already getting hacks and
work-arounds (the clueless are demanding things be broken to
accommodate them). It could be done: as it's a new standard, it doesn't
have to accommodate old non-compliant software; it has its fresh start.
We have the ability to say "this is XHTML, treat it as such" and "this is
HTML, treat it as such" (differently from each other).
There is actually nothing wrong with the principle of changing the spec. to
match practice. For example, parts of CSS2.1 appear to be exactly like that -
making parts of CSS match what browsers do.
There is. All those that have designed something to the spec (data or
programs), doing it properly, suddenly have had their designs ruined.

Those changes to CSS are an example of what's bad about such changes.
CSS goes one step worse in not having any way to declare whether it's
CSS 1 or 2. At least HTML has the doctype, which could be intelligently
used to handle pages (though it never is).

How can I design something without a target? How can I make something
work if the target keeps changing?
A fantastic improvement to web browsers would be a refusal to display
broken HTML, instead displaying a full-screen, bright red, FAILED
response. Then a few idiot web authors might *instantly* find out
that they've got broken HTML, and fix it. They'd have to if they
knew that nobody was going to get to see their broken HTML.

Any browser that tried that would not be used, except by a few purists! Why
would people who want to browse the web ever consider such a browser? They
want to access whatever they are after without fuss. I now write valid 4.01
Strict, and would like a browser that would easily validate my pages while
there were still on my PC. But I *use* the web with a tolerant browser that
most authors have checked their pages against - IE 6. All I really want is to
see the pages as the author intended, which probably means as IE 6 renders
them.


You've written something to a specification, yet feel that it'd be okay
for that specification to be changed? ;-) (According to your prior
comments.)

I still say it'd be a fantastic improvement to web browsers (emphasis on
the plural, i.e. all of them). If they all failed to render broken
pages, there'd be very few broken pages. Just as software
which crashes and doesn't do its job doesn't become widely accepted
(though somehow Microsoft manages to evade natural selection). ;-)

--
My "from" address is totally fake. The reply-to address is real, but
may be only temporary. Reply to usenet postings in the same place as
you read the message you're replying to.

This message was sent without a virus, please delete some files yourself.
Jul 20 '05 #70
On Sun, 22 Feb 2004, Tim wrote:
But the web isn't like that. There is no plausible way to
"redesign our broken pages to adhere to the specs". They are out
there, and they won't go away. More are added every day - perhaps
between 100,000 & 1 million (or perhaps lots more?) invalid pages
every day. And there is no reason at the moment for that to stop.
Only because software is too tolerant. We won't get new correct pages,
if the browsers don't insist on it.


Unfortunately, we won't get new intolerant browsers, because their
developers know (or at least believe) that their users will compare
them unfavourably to MSIE (which doesn't even rate as a WWW browser).
We've got a new standard emerging, XHTML, which is supposed to be
correct or fail absolutely. If that actually happened, all would be
well. But no, we're getting hacks and work-arounds, already (the
clueless are demanding things be broken to accommodate them).
Welcome to Appendix C, from the W3C. Who's clueless now?
It could be done, as it's a new standard, it doesn't
have to accommodate old non-compliant software, it has its fresh start.
That was the original plan: the W3C even trademarked XHTML to give
the opportunity to take action against those who abused the privilege:
but the W3C seems to have got cold feet, and gave us Appendix C.
Which in turn gave us XHTML-flavoured tag soup.
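
For reference, Appendix C of the XHTML 1.0 Recommendation is the set of
compatibility guidelines that lets XHTML be served to existing browsers as
text/html. A minimal sketch of markup written that way (illustrative only,
not a summary of all the guidelines):

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN"
    "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<html xmlns="http://www.w3.org/1999/xhtml" lang="en" xml:lang="en">
<head><title>Appendix C example</title></head>
<body>
  <p>Empty elements get a space before the trailing slash,<br />
  so tag-soup parsers read them as the HTML they already know.</p>
</body>
</html>

Served as text/html, such a document goes through the same forgiving HTML
parsers as everything else, which is exactly why it so easily degenerates
into the tag soup complained about above.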
We have the ability to say this is XHTML treat it as such, and this is
HTML treat it as such (differently than each other).
And very few browsers which "treat it as such".
There is actually nothing wrong with the principle of changing the
spec. to match practice. For example, parts of CSS2.1 appear to be
exactly like that - making parts of CSS match what browsers do.


Not so very different from HTML3.2(spit), eh? Or am I one of the few
around here who actually remembers that disappointment?
There is. All those that have designed something to the spec (data or
programs), doing it properly, suddenly have had their designs ruined.

Those changes to CSS are an example of what's bad about such changes.
CSS goes even one step worse as not having any way to define whether its
CSS 1 or 2. At least HTML has the doctype, which could be intelligently
used to handle pages (though it never is).

How can I design something without a target? How can I make something
work if the target keeps changing?
I was hoping for an improvement to font-size-adjust. Instead they
took it away altogether. Wibble.
I still say it'd be a fantastic improvement to web browsers (emphasis on
the plural, i.e. all of them). If they all failed to render broken
pages, there'd be very few broken pages.
But unless/until the great unwashed wake up to the fact that their
favourite operating system component isn't a WWW browser (which isn't
very likely: if the DoJ effectively conceded defeat, what hope does an
individual have?), that agenda isn't going to come to pass.
Just the same as software which crashes and doesn't do its job,
doesn't become widely accepted
That'll be why everyone out there is running linux, right? (wrong,
unfortunately). Experience shows that in a head to head between
technical competence and marketing, marketing wins every time.
(though somehow Microsoft manages to evade natural selection). ;-)


You see? Despite UK accessibility legislation to the contrary, some
UK banks are still apparently telling their customers that WWW
browsers are not supported - they must use the operating system
component instead.
Jul 20 '05 #71
Tim wrote:
[snip]
"Barry Pearson" <ne**@childsupportanalysis.co.uk> wrote: [snip]
But the web isn't like that. There is no plausible way to "redesign
our broken pages to adhere to the specs". They are out there, and
they won't go away. More are added every day - perhaps between
100,000 & 1 million (or perhaps lots more?) invalid pages every day.
And there is no reason at the moment for that to stop.


Only because software is too tolerant. We won't get new correct
pages, if the browsers don't insist on it. We've got a new standard
emerging, XHTML, which is supposed to be correct or fail absolutely.
If that actually happened, all would be well. But no, we're getting
hacks and work-arounds, already (the clueless are demanding things be
broken to accommodate them). It could be done, as it's a new
standard, it doesn't have to accommodate old non-compliant software,
it has its fresh start. We have the ability to say this is XHTML
treat it as such, and this is HTML treat it as such (differently than
each other).


The web *isn't* too tolerant, by the desires of most of the people who publish
and use it and otherwise help to pay for it.
There is actually nothing wrong with the principle of changing the
spec. to match practice. For example, parts of CSS2.1 appear to be
exactly like that - making parts of CSS match what browsers do.


There is. All those that have designed something to the spec (data or
programs), doing it properly, suddenly have had their designs ruined.


I said "... making parts of CSS match what browsers do". How does that ruin
someone's design? What happens to those designs in CSS2?

What this is doing is helping to turn a de facto standard into a de jure
standard. Sometimes, that can be a productive way of building standards.
Those changes to CSS are an example of what's bad about such changes.
CSS goes even one step worse as not having any way to define whether
its CSS 1 or 2. At least HTML has the doctype, which could be
intelligently used to handle pages (though it never is).
I agree with that point. I'm uncomfortable with the idea of material, written
to a standard, that doesn't identify the standard. If it is guaranteed to
remain forward compatible, so a CSS written to (say) CSS1 looks *and* behaves
precisely like a subset of one written to CSS2 and CSS2.1, then OK. But that
doesn't appear to be the case.

[snip]
Any browser that tried that would not be used, except by a few
purists! Why would people who want to browse the web ever consider
such a browser? They want to access whatever they are after without
fuss. I now write valid 4.01 Strict, and would like a browser that
would easily validate my pages while they were still on my PC. But
I *use* the web with a tolerant browser that most authors have
checked their pages against - IE 6. All I really want is to see the
pages as the author intended, which probably means as IE 6 renders
them.


You've written something to a specification, yet feel that it'd be
okay for that specification to be changed? ;-) (According to your
prior comments.)


I wouldn't like 4.01 to change (and certainly not if previously valid pages
ceased to be valid!) But I see no objection in principle to having a 4.02,
with its own DOCTYPE.

I want browsers to render a valid 4.01 document according to the formatting
recommendations in the W3C material, for reasons similar to yours - I want my
design handled properly. But I want to use a tolerant browser when accessing
invalid material (whether because it hasn't a DOCTYPE or just doesn't conform
to the DTD).
I still say it'd be a fantastic improvement to web browsers (emphasis
on the plural, i.e. all of them). If they all failed to render broken
pages, there'd be very few broken pages. Just the same as software
which crashes and doesn't do its job, doesn't become widely accepted
(though somehow Microsoft manages to evade natural selection). ;-)


I use a lot of MS software, because on the whole it does the job I want. Yes,
W2000 & IE sometimes crash. (Although not nearly as often for me as claims I
hear - I would normally expect them to stay up for several days, perhaps a
week or two, at a time). But they typically do what I bought them for.

--
Barry Pearson
http://www.Barry.Pearson.name/photography/
http://www.BirdsAndAnimals.info/
http://www.ChildSupportAnalysis.co.uk/
Jul 20 '05 #72
Alan J. Flavell wrote:
Barry Pearson wrote ... [snip]
> There is actually nothing wrong with the principle of changing the
> spec. to match practice. For example, parts of CSS2.1 appear to be
> exactly like that - making parts of CSS match what browsers do.


Not so very different from HTML3.2(spit), eh? Or am I one of the few
around here who actually remembers that disappointment?

[snip]

With the benefit of hindsight, it appears that the pioneers of the web
created a free marketplace for new ideas & new features that then ran out
of their control.

What else could/should they have done instead of HTML 3.2? I suspect that its
seeds were sown a few years earlier: either by (somehow) not allowing
uncontrolled development, or by dramatically reducing the need for all the
additional features that became popular and had to be accommodated.

--
Barry Pearson
http://www.Barry.Pearson.name/photography/
http://www.BirdsAndAnimals.info/
http://www.ChildSupportAnalysis.co.uk/
Jul 20 '05 #73
Barry Pearson wrote:

The web *isn't* too tolerant, by the desires of most of the people
who publish and use it and otherwise help to pay for it.


I regard IE as a seriously flawed browser, especially after testing its
ability to handle content negotiation. (Google ciwa-site-design for the
gory details.) I was forced to remove what might other be a useful
feature because of IE/Win. This is a disservice to everyone.

--
Brian (remove "invalid" from my address to email me)
http://www.tsmchughs.com/
Jul 20 '05 #74
Barry Pearson wrote:
Alan J. Flavell wrote:
Barry Pearson wrote ... There is actually nothing wrong with the principle of changing
the spec. to match practice.
Not so very different from HTML3.2(spit), eh? Or am I one of the
few around here who actually remembers that disappointment?


it appears that the pioneers of the web created a free market-place,
for new ideas & new features, that then ran out of their control.


It appears to me that they caved to the marketing desires of Netscape
et al., which produced a new de facto standard via "extensions."
(Anyone familiar with the history of color television standards in the
U.S. will spot a familiar pattern.)
What else could/should they have done instead of HTML 3.2?


They should have rejected those extensions, since they were not
practical. They should have released HTML 3.0. And they should have
moved ahead with stylesheets, which had already been proposed.

--
Brian (remove "invalid" from my address to email me)
http://www.tsmchughs.com/
Jul 20 '05 #75
Brian wrote:
Barry Pearson wrote:

The web *isn't* too tolerant, by the desires of most of the people
who publish and use it and otherwise help to pay for it.


I regard IE as a seriously flawed browser, especially after testing
its ability to handle content negotiation. (Google ciwa-site-design
for the gory details.) I was forced to remove what might other be a
useful feature because of IE/Win. This is a disservice to everyone.


Of course it is a seriously flawed browser! And of course that is a
disservice. Quite apart from the cases where it doesn't implement certain CSS
properties, and gets others wrong, I've wasted hours (more like days) over the
peekaboo & guillotine & other bugs until I found discussions about them.
(Other browsers *also* have their faults).

But tolerance is a service when browsing dodgy sites. Users typically want to
get at the content, in pretty much the way the publisher/author intended it.
While I would use an intolerant browser to test my own pages, I would not use
one for browsing the web. And I'm sure most surfers wouldn't either - why
should they handicap themselves?

--
Barry Pearson
http://www.Barry.Pearson.name/photography/
http://www.BirdsAndAnimals.info/
http://www.ChildSupportAnalysis.co.uk/
Jul 20 '05 #76
Brian wrote:
Barry Pearson wrote:
Alan J. Flavell wrote:
Barry Pearson wrote ...

> There is actually nothing wrong with the principle of changing
> the spec. to match practice.

Not so very different from HTML3.2(spit), eh? Or am I one of the
few around here who actually remembers that disappointment?


it appears that the pioneers of the web created a free market-place,
for new ideas & new features, that then ran out of their control.


It appears to me that they caved to the marketing desires of Netscape
et. al., which produced a new de facto standard via "extensions."
(Anyone familiar with the history of color television standards in the
U.S. will spot a familiar pattern.)


But those extensions were "out there" and being used. Would use have ceased if
W3C had published a tighter recommendation? Have people stopped using HTML 3.2
features that are not in HTML 4.01? (That is a serious question - I'm not sure
what those features are). Certainly they still haven't stopped using
deprecated features even where they are handled well by browsers using CSS1
properties. And CSS1 came out in 1996.

Imagine some extension that NN provided. Which would be worse - for W3C to put
it into a recommendation, making it "respectable"; or to omit it, and risk IE
and others implementing alternative extensions for the same want? W3C didn't
have control in that sense - they had to try to stop a free market
degenerating into unsupportable anarchy of the "this site works with XX
browser" sort. The situation that I am more familiar with, of being able to
exercise tight design control over specifications & their use, can't translate
to the web.
What else could/should they have done instead of HTML 3.2?


They should have rejected those extensions, since they were not
practical. They should have released HTML 3.0. And they should have
moved ahead with stylesheets, which had already been proposed.


By "not practical", do you mean "can't work", or "doesn't fit the strategy"?
Surely they worked, or else they wouldn't be there.

I wonder if the contrast with TCP/IP is relevant? My understanding (no doubt I
will be corrected!) is that DARPA (initially ARPA) not only devised the
architecture and the protocol standards, but they actually built large-scale
demonstrations of them (ARPANET) that were usable in practice. So, by the time
other organisations could freely start to innovate, there were practical
constraints about the form of the innovation. You had to inter-work with
something that existed. But you could still add extensions for things that
were not there originally, such as application-layer features, 2-phase commit,
etc. And there was therefore scope for incompatible 2-phase commit standards,
even though 2-phase commit is an obvious area where interworking is needed.

The web pioneers didn't attempt to achieve this level of standardisation,
rightly or wrongly. They didn't deliver sufficient technology to constrain the
free market, nor use such things as patents to prevent it. So they ended up
creating a free market with no mechanisms to keep it within strategic
boundaries.

(I don't know what is in HTML 3.2 that wasn't in HTML 3.0. <font>? I agree
that earlier recommendations for CSS would have been useful, but in fact there
wasn't that much difference in time. For example, had 3.0 become a
recommendation, that would have been either late 1995 or during 1996. And CSS1
became a recommendation in December 1996. Both NN and IE began support of CSS1
in 1997, before HTML 4 became a recommendation - they appear to have "jumped
the gun").

--
Barry Pearson
http://www.Barry.Pearson.name/photography/
http://www.BirdsAndAnimals.info/
http://www.ChildSupportAnalysis.co.uk/
Jul 20 '05 #77
"Barry Pearson" <ne**@childsupportanalysis.co.uk> wrote:
But those extensions were "out there" and being used. Would use have ceased if
W3C had published a tighter recommendation? Have people stopped using HTML 3.2
features that are not in HTML 4.01? (That is a serious question - I'm not sure
what those features are).
Features in HTML 3.2 but not in 4.0 Transitional? I don't think there
are any. Except for XMP, LISTING and PLAINTEXT which are deprecated in
HTML 3.2 and obsolete in 4.x.
(I don't know what is in HTML 3.2 that wasn't in HTML 3.0. <font>?
As far as elements go: FONT, BASEFONT and APPLET. Plus a lot of
attributes.
I agree
that earlier recommendations for CSS would have been useful, but in fact there
wasn't that much difference in time.
If development had continued along the HTML 3.0 path rather than the
HTML 3.2/HTML 4 path then any development of a stylesheet language
might have been very different to what we know as CSS.

And CSS was not intended to be the "one and only" stylesheet language.
Stylesheet languages already existed for other SGML applications,
applying some of them (maybe in simplified forms) to HTML would not
have been impossible.
For example, had 3.0 become a
recommendation, that would have been either late 1995 or during 1996. And CSS1
became a recommendation in December 1996. Both NN and IE began support of CSS1
in 1997, before HTML 4 became a recommendation - they appear to have "jumped
the gun").


HTML 4 is not a pre-requisite for stylesheets, not even a
pre-requisite for CSS.

The STYLE element was included in HTML 3.0 and 3.2 (in the latter case
really just as a placeholder).

The LINK element is in HTML 2.0, the specs for which even mentions
stylesheets in relation to it:
http://www.w3.org/MarkUp/html-spec/h....html#SEC5.2.4

HTML 3.0 included ID and CLASS attributes but HTML 3.2 did not. This
is part of the reason why people see 3.2 as setting back the switch to
stylesheets.
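
As a sketch of the mechanism being described, here is the later, settled form
of stylesheet linking (the href value is just a placeholder; HTML 2.0 only
defines LINK and mentions stylesheets as a possible use, the rel="stylesheet"
convention came with the CSS work):

<head>
  <title>Example</title>
  <!-- external stylesheet attached via the LINK element -->
  <link rel="stylesheet" type="text/css" href="site.css">
  <!-- embedded rules via STYLE, present from HTML 3.0/3.2 onwards -->
  <style type="text/css">
    h1 { color: navy; }
  </style>
</head>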

Steve

--
"My theories appal you, my heresies outrage you,
I never answer letters and you don't like my tie." - The Doctor

Steve Pugh <st***@pugh.net> <http://steve.pugh.net/>
Jul 20 '05 #78
Steve Pugh wrote:
"Barry Pearson" <ne**@childsupportanalysis.co.uk> wrote:

[snip]
For example, had 3.0 become a
recommendation, that would have been either late 1995 or during 1996.
And CSS1 became a recommendation in December 1996. Both NN and IE
began support of CSS1 in 1997, before HTML 4 became a recommendation
- they appear to have "jumped the gun").


HTML 4 is not a pre-requisite for stylesheets, not even a
pre-requisite for CSS.

The STYLE element was included in HTML 3.0 and 3.2 (in the latter case
really just as a placeholder).

The LINK element is in HTML 2.0, the specs for which even mentions
stylesheets in relation to it:
http://www.w3.org/MarkUp/html-spec/h....html#SEC5.2.4

HTML 3.0 included ID and CLASS attributes but HTML 3.2 did not. This
is part of the reason why people see 3.2 as setting back the switch to
stylesheets.


Thanks. I now understand some of the conflicting statements I've seen. Not
carrying forward ID & CLASS from 3.0 is weird. But I get the feeling there was
a clash between alligators & objectives, with NN and IE, etc., being the
alligators.

(But if LINK was in 2.0, does that mean that type / tag selectors were
effectively usable in 3.2? Not that they would have avoided the need for
<font> if CLASS wasn't in.)

--
Barry Pearson
http://www.Barry.Pearson.name/photography/
http://www.BirdsAndAnimals.info/
http://www.ChildSupportAnalysis.co.uk/
Jul 20 '05 #79
On Mon, 23 Feb 2004 10:06:01 -0000, Barry Pearson
<ne**@childsupportanalysis.co.uk> wrote:
Brian wrote:

I regard IE as a seriously flawed browser..


Of course it is a seriously flawed browser!


And I regard a lot of users' eyesight as seriously flawed too. All part of
what we have to work with to make these silly little websites.
Jul 20 '05 #80
On Mon, 23 Feb 2004, Barry Pearson wrote:
Have people stopped using HTML 3.2
features that are not in HTML 4.01? (That is a serious question
"Serious", maybe, but asked in a rather loaded form, so I'll feel no
shame at offering a differently-loaded answer.

Yes: in the sense that a progressively increasing proportion of web
sites are doing their visual styling by means of CSS - the
presentational features of HTML/3.2(spit) are falling into disuse, or
retained only as far as seems appropriate for supporting older
browsers.

Sure, there are still vast numbers of web pages around (even
newly-built ones) that are designed according to that now-antiquated
specification: some people take a long time to get up to speed, or are
still using old web-page-extrusion software based on its principles.
But AFAICS the balance has already shifted, and, as people recognise
the benefits of stylesheet-based design, the development can only
accelerate.

But I still begrudge the wasted years in between.
- I'm not sure what those features are).
For the most part, the "features" of HTML/3.2 were retained in
HTML/4.01 transitional. But they were deprecated.
Imagine some extension that NN provided. Which would be worse - for
W3C to put it into a recommendation, making it "respectable"; or to
omit it, and risk IE and others implementing alternative extensions
for the same want?
What -actually- happened was that the W3C made a recommendation, and
the Big Two carried on implementing their own vendor-specific stuff
anyway.

The only difference nowadays is that Mozilla, Opera etc. are aiming at
the W3C specifications; while IE merely provide pointers to the W3C
specifications, while implementing something quite different (case in
point: the <button> element).
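
For context, here is the element in question as HTML 4.01 defines it, a
minimal sketch (the names and values are placeholders). The spec allows rich
markup inside the button and says the control's value attribute is what gets
submitted; IE of that era was widely reported to submit the element's content
instead, and to submit every button on the form, which is the kind of
divergence alluded to here:

<form action="/search" method="get">
  <button name="op" type="submit" value="go">
    <em>Search</em> the site
  </button>
</form>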
What else could/should they have done instead of HTML 3.2?


They should have rejected those extensions, since they were not
practical. They should have released HTML 3.0.
Well, a modified nod to that - HTML3.0 had some good stuff in it,
which it was a pity to lose; but also there was quite a bit of
presentational detail that would have found a better home in the
stylesheets. And of course HTML3.0 needed RFC2070 before it could
form the base of a truly "W"WW.
And they should have
moved ahead with stylesheets, which had already been proposed.


Indeed.
By " not practical", do you mean "can't work", or "doesn't fit the
strategy"? Surely they worked, or else they wouldn't be there.


There's a difference between satisfying the immediate itch, and
proving to be a well-engineered solution to an underlying requirement.

Vendor extensions (with a few honourable exceptions[1] - think of the
Spyglass client-side imagemap RFC for a positive example) have proved
for the most part to have been slapped together to wow the
suggestible, rather than being carefully thought out as an engineering
solution. Marquee, anyone?

In fact, over a period of quite some years, as each new browser
version came out, it was evident that the vendor was trying to enlist
web authors to incorporate non-interworkable features into their web
sites, in the hope of locking in the users to a specific browser.
The browser developers weren't by any means trying to market a fully
WWW-compatible product to the end users - rather, they were marketing
their "product" (their vendor extensions) to the information
providers, while giving the browsers away free in order to capture the
eyeballs for the information providers.

(Opera was the first company to make a high-profile attempt to reverse
that state of affairs, I think it's fair to say.)

The only way that the W3C could have conceivably dealt with that kind
of marketing dirty-tricks would be to trademark the terms HTML and/or
WWW, and take court action against those who abused them. But the W3C
is a consortium of its members, and it's hardly likely to cut off its
own nose to spite its face like that.

ttfn

[1] I'm not going to get dragged out on "tables" again, because they
weren't exclusively a vendor extension, but were a coming-together of
quite a number of different development threads and proposals.

Jul 20 '05 #81
Alan J. Flavell wrote:
On Mon, 23 Feb 2004, Barry Pearson wrote:
Have people stopped using HTML 3.2
features that are not in HTML 4.01? (That is a serious question
"Serious", maybe, but asked in a rather loaded form, so I'll feel no
shame at offering a differently-loaded answer.

Yes: in the sense that a progressively increasing proportion of web
sites are doing their visual styling by means of CSS - the
presentational features of HTML/3.2(spit) are falling into disuse, or
retained only as far as seems appropriate for supporting older
browsers.


I meant "not in any version of 4.01", but it appears they are still in
Transitional.
Sure, there are still vast numbers of web pages around (even
newly-built ones) that are designed according to that now-antiquated
specification: some people take a long time to get up to speed, or are
still using old web-page-extrusion software based on its principles.
But AFAICS the balance has already shifted, and - as people recognise
the benefits of stylesheet-based design, the development can only
accelerate.
The latest version of Dreamweaver has HTML presentation largely switched off
by default, and much better editing and integration of CSS. This is certainly
the future for such editors.
But I still begrudge the wasted years in between.
I'll come back to that below - I suspect alternatives would still have delayed
the explosion for years.

[snip]
Imagine some extension that NN provided. Which would be worse - for
W3C to put it into a recommendation, making it "respectable"; or to
omit it, and risk IE and others implementing alternative extensions
for the same want?


What -actually- happened was that the W3C made a recommendation, and
the Big Two carried on implementing their own vendor-specific stuff
anyway.


But wasn't the effect of 3.2 to cause them to converge (or at least not
diverge further)? Otherwise how could it still be used effectively, and how
could 4.01 Transitional be used? (Did extra tags and attributes appear after
3.2 was published?)
The only difference nowadays is that Mozilla, Opera etc. are aiming at
the W3C specifications; while IE merely provide pointers to the W3C
specifications, while implementing something quite different (case in
point: the <button> element).
>> What else could/should they have done instead of HTML 3.2?
>
> They should have rejected those extensions, since they were not
> practical. They should have released HTML 3.0.
Well, a modified nod to that - HTML3.0 had some good stuff in it,
which it was a pity to lose; but also there was quite a bit of
presentational detail that would have found a better home in the
stylesheets. And of course HTML3.0 needed RFC2070 before it could
form the base of a truly "W"WW.
And would that have stopped people developing web pages with those
presentation features in? The problem is that once those features were in the
wild, which appears to have been the case by that time, there was no need for
people to change significantly. There doesn't appear to have been an effective
"stick", so what was left was "carrots", and they weren't
convincing enough. If you can do something with HTML, even though it isn't in
a recommendation published by someone you've never heard of, why switch to an
additional complication that inevitably lagged? (To stop the lag, CSS would
probably have had to be in the first browsers - and then the browser wars
might have been based rather more on proprietary CSS features!)

The problem I see here is that these arguments about what should have happened
are a bit like saying "if only people would stop hating each other we would
have world peace". Yes - but human nature isn't like that.

[snip] By " not practical", do you mean "can't work", or "doesn't fit the
strategy"? Surely they worked, or else they wouldn't be there.


There's a difference between satisfying the immediate itch, and
proving to be a well-engineered solution to an underlying requirement.


That is true for W3C. It isn't true for people trying to make money this year
by selling tools & trying to use web sites for business reasons. I've spent a
lot of my life in exactly that debate with business managers & marketers,
although with large computer systems, and the business cycle doesn't work like
that. The trick is to satisfy immediate needs while *also* building assets
(e.g. specs & infrastructure) for the future. In fact, I wrote a paper on this
sort of subject:

"A Process For Designing A Value Chain For A New Product"
http://www.barry.pearson.name/papers/value_chain/

The name refers to the need to collaborate for immediate benefit, while
keeping an eye on the changing relationships over a life cycle and building
assets & competence. But there will be some end-points that it was never
possible to get to, except in fantasies.

[snip] In fact, over a period of quite some years, as each new browser
version came out, it was evident that the vendor was trying to enlist
web authors to incorporate non-interworkable features into their web
sites, in the hope of locking-in the users to a specific browser.
The browser developers weren't by any means trying to market a fuly
WWW-compatible product to the end users - rather, they were marketing
their "product" (their vendor extensions) to the information
providers, while giving the browsers away free in order to capture the
eyeballs for the information providers.
That was probably perfect business sense. If management hadn't been doing
that, their investors should have thrown them out.

[snip] The only way that the W3C could have conceivably dealt with that kind
of marketing dirty-tricks would be to trademark the terms HTML and/or
WWW, and take court action against those who abused them. But the W3C
is a consortium of its members, and it's hardly likely to cut off its
own nose to spite its face like that.

[snip]

I suspect it was too late by the time W3C appeared. If you had said that about
CERN or whatever, then that may have worked. But then, I suspect that the
explosion of the web would have taken years longer. It was presumably the
possibility of getting lots of dollars/pounds that drove people to build
browsers, authoring tools, and publish web sites. Pre-competitive research
would probably have taken a lot longer. (DARPA appears to have been able to
drive the Internet standards forward, but perhaps that was because of the
Defence connection? CERN may not have had sufficient clout).

--
Barry Pearson
http://www.Barry.Pearson.name/photography/
http://www.BirdsAndAnimals.info/
http://www.ChildSupportAnalysis.co.uk/
Jul 20 '05 #82
