From: Xah Lee
Subject: HTML Correctness and Validators
Date: 
Message-ID: <2fb289be-00b3-440a-b153-ca88f0ba16c5@d42g2000prb.googlegroups.com>
recently i wrote a blog essay about html correctness and html
validators, with relations to the programing lang communities. I hope
programing lang fans will take more consideration on the correctness
of the doc they produces.

HTML Correctness and Validators
• http://xahlee.org/js/html_correctness.html

plain text version follows.
---------------------------

HTML Correctness and Validators

Xah Lee, 2008-12-28

Some notes about HTML correctness and HTML validators.

Condition Of Website Correctness

My website “xahlee.org” has close to 4000 HTML files. All are valid
HTML files. “Valid” here means passing the W3C's validator at
http://validator.w3.org/. Being a programming and correctness nerd, I
consider correct HTML important. (Correct markup has important,
practical benefits, such as machine parsing and transformation, as
picked up by the XML movement. Ultimately, it is a foundation of the
semantic web↗.)

In programming language communities, the programmer tech geeks are
fanatical about their favorite language's superiority, and in the case
of functional languages, they are often proud of their correctness
features. However, a look at their official docs or websites shows
that they are ALL invalid HTML, with errors in roughly every 10 lines
of HTML source code. It is fucking ridiculous.

In the web development geek communities, you can see how they are
tight-assed about correct use of HTML/CSS, with frequent and heated
debates about the propriety of semantic markup, and they don't
hesitate to ridicule the Microsoft Internet Explorer browser or the
average HTML content producer. However, look at the HTML they produce:
almost none of it is valid.

Bad HTML also appears in the vast majority of documents produced by
standards organizations, such as the Unicode Consortium↗ and the
IETF↗. For example, if you run the W3C validator on the IETF's home
page, there are 32 errors, including “no doctype found”, and if you
validate Unicode's http://www.unicode.org/faq/utf_bom.html, there are
2 errors. (A few years ago, they were much worse. I don't think even
“w3.org”'s pages were valid back then.)

In about 2006, I spent a few hours researching which major websites
produce valid HTML. To this date, I know of only one major site that
produces valid HTML, and that is Wikipedia. This is fantastic.
Wikipedia is produced by the MediaWiki↗ engine, written in PHP. Many
other wiki sites also run MediaWiki, so they undoubtedly are valid as
well. As far as I know, a few other wiki or forum packages also
produce valid HTML, though they are the exception rather than the
norm. (I did check 7 random pages from “w3.org”; it looks like they
are all valid today.)

Personal Need For Validator

My personal need is to validate, typically, hundreds of files on my
local drive. Every month or so, I do a systematic regex find-replace
operation on a directory. This often results in over a hundred changed
files. Every now and then, I improve my CSS or HTML markup semantics
site-wide, so the find-replace runs on all 4000 files. Usually the
find-replace is carefully crafted with attention to correctness, or
done in Emacs interactively, so possible regex screwups are minimal,
but I still wish to validate in batch after the operation.

Batch validation is useful because, if I screw up in my regex, it
usually results in badly formed HTML, so HTML validation can catch the
mistake.

In 2008, I converted most of my sites from HTML 4 Transitional to HTML
4 Strict. The process is quite a manual pain, even when the files I
start with are valid.

Here are some examples. In html4strict:

    * “‹br›” must be inside a block-level tag.
    * An image tag “‹img ...›” needs to be enclosed in a block-level
tag such as “‹div›”.
    * Content inside a blockquote must be wrapped in a block-level
tag. e.g. “‹blockquote›Time Flies‹/blockquote›” would be invalid in
html4strict; you must have “‹blockquote›‹p›Time Flies‹/p›‹/blockquote›”.

Let's look at the image tag example. You might think it is trivial to
transform, because you can simply use a regex to wrap a “‹div›” around
each image tag. However, it's not that simple, because, for example, I
often have this form:

‹img src="pretty.jpg" alt="pretty girl" width="565" height="809"›
‹p›above: A pretty girl.‹/p›

The “p” tag immediately below an “img” tag functions as the image's
caption. I have CSS set up so that this caption has no gap to the
image above it, like this:

img + p {margin-top:0px;width:100%} /* img caption */

I have the “width:100%” because “p” normally has “width:80ex” for a
normal paragraph.

Now, if I simply wrap a “div” tag around all my “img” tags, I will end
up with this form:

‹div›‹img src="pretty.jpg" alt="pretty girl" width="565" height="809"›‹/div›
‹p›above: A pretty girl.‹/p›

Now this screws up my caption CSS, and there is no CSS selector that
matches a “p” that comes after a “div › img”.

Also, sometimes I have a sequence of images. Wrapping each in a “div”
would introduce gaps between them.
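
To make the pitfall concrete, here is a minimal sketch of the kind of
naive regex find-replace wrap described above (my own illustration;
the directory path is a placeholder). Mechanically it works, but it
produces exactly the “div”-wrapped form that defeats the “img + p”
caption rule.

# Sketch only: naively wrap every img tag under a directory in a div.
# This is the transformation that breaks the "img + p" caption CSS,
# because the p no longer directly follows the img.

use strict;
use warnings;
use File::Find;

my $dirPath = q(/Users/xah/web);   # placeholder directory

sub wanted {
  return unless $_ =~ m{\.html$} && -f $File::Find::name;
  local $/ = undef;                # slurp the whole file
  open my $in, '<', $_ or die "$File::Find::name: $!";
  my $text = <$in>;
  close $in;
  # wrap each img tag in a div
  $text =~ s{(<img\b[^>]*>)}{<div>$1</div>}g;
  open my $out, '>', $_ or die "$File::Find::name: $!";
  print {$out} $text;
  close $out;
}

find(\&wanted, $dirPath);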

This is just a simplified example. In short, converting from
html4transitional to html4strict while hoping to retain appearance or
markup semantics in practical ways is pretty much a manual pain. (The
ultimate reason is that html4transitional is far from being a good
semantic markup; html4strict is a bit better.)

Validators

In my work I need a batch validator. What I want is a command line
utility that can batch-validate all files in a directory. Here are
some solutions related to HTML validation.

    * The standard validator service by the W3C: http://validator.w3.org/
(see also: W3C Markup Validation Service↗). The problem with this is
that it can't validate local files and can't run in batch. Using it to
validate 4000 files through the network (with the help of a Perl
script; see the sketch after this list) would not be acceptable, since
each job means massive web traffic. (My site is near 754 Mebibyte↗.)

    * Firefox has a “Html Validator” add-on by Marc Gueury:
https://addons.mozilla.org/en-US/firefox/addon/249. This is based on
the same code as the W3C validator, works on local files, and is
extremely fast. When browsing any page, it shows a green check mark in
the window corner when the file is valid.

    * Firefox has a “Web Developer” add-on by Chris Pederick:
https://addons.mozilla.org/en-US/firefox/addon/60. Since Firefox
“v.3”, it has an icon that indicates whether a page's CSS and
JavaScript have errors, and also indicates whether the file is using
Quirks mode↗.
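
To make the first item above concrete, here is a rough sketch of what
such a network-based Perl check could look like. It is only an
illustration: the “uploaded_file” form field name and the reliance on
the validator's “X-W3C-Validator-Status” response header are my
assumptions about the public service, and each call is one full HTTP
round trip, which is exactly why this does not scale to 4000 files.

# Sketch only: ask the W3C validator service over the network whether
# one local file is valid, by uploading it and reading the status header.

use strict;
use warnings;
use LWP::UserAgent;

my $file = shift @ARGV or die "usage: $0 file.html\n";

my $ua = LWP::UserAgent->new;
my $response = $ua->post(
  'http://validator.w3.org/check',
  Content_Type => 'form-data',
  Content      => [ uploaded_file => [ $file ] ],   # upload the local file
);

# assumption: the service reports Valid/Invalid/Abort in this header
my $status = $response->header('X-W3C-Validator-Status') || 'Unknown';
print "$status: $file\n";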

I rely heavily on these two Firefox tools. However, they do not let me
do batch validation. Over the years I've searched for batch validation
tools. Here is a list:

    * HTML Tidy↗. A batch tool primarily for cleaning up HTML markup. I
didn't find it useful for batch validation purposes, nor for HTML
conversion jobs. It doesn't serve my HTML conversion needs because the
tool is incapable of retaining your HTML formatting (i.e. retaining
the locations of your newlines). I do a lot of regex-based text
processing on my HTML files, so I need assumptions about where the
newlines are in my HTML files. If I used Tidy on my site, I would have
to abandon regex-based text processing and instead treat my files
using HTML and DOM parsers, which makes most practical text processing
needs considerably more complex and cumbersome.

    * A Perl module, “HTML::Lint”, at http://search.cpan.org/~petdance/HTML-Lint-2.06/lib/HTML/Lint.pm.
Seems pretty much like HTML Tidy. (See the sketch after this list.)

    * http://htmlhelp.com/tools/validator/offline/index.html.en is
another validation tool. I haven't looked into it yet. Their doc about
differences from other validators, http://htmlhelp.com/tools/validator/differences.html.en,
is quite interesting, and seems an advantage for my needs.

    * OpenJade and OpenSP: http://openjade.sourceforge.net/. Seems a
good tool. I haven't looked into it.

    * Emacs's nxml-mode, http://www.thaiopensource.com/nxml-mode/, by
the XML expert James Clark↗. This is written in elisp, with over 10
thousand lines of code. It indicates whether your XML file is valid as
you type. This package is very well received, and is reputed to make
Emacs the best XML editor. This is fantastic, but since my files are
currently HTML, not XHTML, I haven't used it much. There is an Emacs
HTML mode based on this package, called nxhtml-mode, but the code is
still pretty alpha and I find it has a lot of problems.
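
As a quick illustration of the HTML::Lint item above, a batch run over
a directory could look roughly like this. It is a sketch based on the
module's documented interface; note that lint-style warnings are not
the same thing as DTD validation, so a clean run here is not the same
as passing the W3C validator.

# Sketch only: recursively lint a directory's html files with HTML::Lint.

use strict;
use warnings;
use File::Find;
use HTML::Lint;

my $dirPath = q(/Users/xah/web/emacs);   # placeholder directory

sub wanted {
  return unless $_ =~ m{\.html$} && -f $File::Find::name;
  my $lint = HTML::Lint->new;
  $lint->parse_file($File::Find::name);
  if (my @errors = $lint->errors) {
    print "Problem: $File::Find::name\n";
    print '  ', $_->as_string, "\n" for @errors;
  } else {
    print "Good: $File::Find::name\n";
  }
}

find(\&wanted, $dirPath);
print "Done.\n";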

One semi-solution for batch validation I found is “Validator S.A.C.”,
at http://habilis.net/validator-sac/. It is basically the W3C
validator compiled for OS X with a GUI interface. However, it is not
designed for batch operation. To do batch work, I run it like this:
“/Applications/Validator-SAC.app/Contents/Resources/weblet ‹html file path›”.
However, it outputs a whole report in HTML on the validation result
(the same page you would see from the W3C validation service). This is
not what I want. What I want is simply for it to tell me whether a
file is valid or not. For any error detail, I can simply load the page
in Firefox myself, since if I need to edit it I need to view it in
Firefox anyway. So, to fix this problem, you can wrap it in a Perl
script, which takes a directory and simply prints the path of any file
that is invalid.

Here's the Perl script:

# perl

# 2008-06-20. Validates a given directory's html files recursively.
# Requires the Mac OS X app Validator-SAC.app,
# at http://habilis.net/validator-sac/, as of 2008-06.

use strict;
use File::Find;

my $dirPath = q(/Users/xah/web/emacs);
my $validator = q(/Applications/Validator-SAC.app/Contents/Resources/weblet);

# called by File::Find for each file found under $dirPath
sub wanted {
  if ($_ =~ m{\.html$} && not -d $File::Find::name) {
    # run the validator and keep only its status header line
    my $output = qx{$validator "$File::Find::name" | head -n 11 | grep 'X-W3C-Validator-Status:'};
    if ($output ne qq(X-W3C-Validator-Status: Valid\n)) {
      print q(Problem: ), $File::Find::name, "\n";
    } else {
      print qq(Good: $_), "\n";
    }
  }
}

find(\&wanted, $dirPath);

print q(Done.);

However, for some reason, “Validator S.A.C.” takes nearly 2 seconds to
check each file; in contrast, the Firefox Html Validator add-on takes
a fraction of a second while also rendering the whole page completely.
For example, suppose I have 20 files in a directory that I need to
validate. It is faster to just open all of them in Firefox and eyeball
the validity indicator than to run “Validator S.A.C.” on them.

I wrote to its author, Chuck Houpt, about this. It seems that the
validator uses Perl and loads about 20 heavy-duty web-related Perl
modules to do its job, and overall is wrapped as a Common Gateway
Interface↗. Perhaps there is a way to avoid these wrappers and call
the parser or validator directly.

I'm still looking for a fast, batch, html validation tool.

-----------------

  Xah
∑ http://xahlee.org/

☄

From: Aaron Gray
Subject: Re: HTML Correctness and Validators
Date: 
Message-ID: <6rsbahF33ndvU1@mid.individual.net>
"Xah Lee" <······@gmail.com> wrote in message 
·········································@d42g2000prb.googlegroups.com...
>recently i wrote a blog essay about html correctness and html
>validators, with relations to the programing lang communities. I hope
>programing lang fans will take more consideration on the correctness
>of the doc they produces.
>
>HTML Correctness and Validators
>. http://xahlee.org/js/html_correctness.html

Do you enjoy spamming comp.lang.functional with OT cross-posts ?

Regards,

Aaron
From: Lew
Subject: Re: HTML Correctness and Validators
Date: 
Message-ID: <9eba3125-07e7-42ff-98c0-0b9be6105315@r36g2000prf.googlegroups.com>
Xah Lee wrote...
>> recently [sic] i [sic] wrote a blog essay about html [sic] correctness and html [sic]
>> validators, with relations [sic] to the programing [sic] lang [sic] communities. I hope
>> programing [sic] lang [sic] fans will take more consideration on [sic] the correctness
>> of the doc [sic] they produces [sic].

"Aaron Gray" wrote:
> Do you enjoy spamming comp.lang.functional with OT cross-posts ?

Is that a rhetorical question?

--
Lew
From: Robert Maas, http://tinyurl.com/uh3t
Subject: Re: HTML Correctness and Validators
Date: 
Message-ID: <REM-2009jan03-002@Yahoo.Com>
> From: Xah Lee <······@gmail.com>
> Condition Of Website Correctness
> My website “xahlee.org” has close to 4000 html
> files. All are valid html files. “Valid” here
> means passing the w3c's validator at http://validator.w3.org/.

Congratulations on somehow generating so much valid HTML and for
taking the pain to check it to make sure it's correct.

> To this date, I know of only one major site that produces valid
> html, and that is Wikipedia.

And it annoys you that most *other* Web sites are crappy by
comparison? IMO the solution is for you to keep track of which Web
sites are crappy, and provide some useful service on *your* Web
site which others want to use, and to deny access to anyone
connecting from any place that hosts lots of crappy HTML. When they
try to connect to your service, you deliver instead a critique of
one of the worst of their Web pages. Your Web site refuses to
provide them the service they wanted until they clean up their Web
site.

> Every month or so, i do systematic regex find-replace operation
> on a dir. This often results over a hundred changed files. Every
> now and then, i improve my css or html markup semantics site wide,
> so the find-replace is on all 4000 files. Usually the find- replace
> is carefully crafted with attention to correctenss, or done in
> emacs interactively, so possible regex screwup is minimal, but
> still i wish to validate by batch after the operation.

IMO the cure for your distress is *not* to generate HTML directly,
and *not* to perform brute-force regex editing of your HTML
directly, but rather to generate *all* your HTML from Lisp
structures such as: (:HTML (:HEAD ...) (:BODY ...))
Whatever regex changes (or !!better!!, structured changes) you make
to your Lisp structures, will either trigger a diagnostic in the
conversion to HTML, or produce valid HTML. Furthermore, deep
analysis is much easier to perform with Lisp data structures than
with HTML text. (Of course you can always keep source in SGML or
XML form and *parse* it to create Lisp data structures, and analyze
those. It just seems easier to me if you keep the source in
s-expression form rather than SGML or XML.) In some cases you don't
want the source in *any* form like that at all, rather keep the
true source in RDBS tables and generate everything from that data.
In that case Lisp data is an intermediate form between RDBS tables
and HTML.
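
For illustration, here is a rough sketch of the same idea written in
Perl rather than Lisp, only to match the script earlier in the thread.
The data layout ([tag, {attributes}, children...]) and the helper
names are just an illustration, not any existing tool; the point is
only that when markup is generated from a structure, the generator,
not regex editing of HTML text, is what determines well-formedness.

# Sketch only: serialize a nested data structure into HTML, so that the
# output's well-formedness is guaranteed by the generator.

use strict;
use warnings;

# elements that take no end tag in HTML 4
my %empty = map { $_ => 1 } qw(img br hr meta link input);

# escape text content so text nodes cannot break the markup
sub esc {
  my ($s) = @_;
  $s =~ s/&/&amp;/g;
  $s =~ s/</&lt;/g;
  $s =~ s/>/&gt;/g;
  return $s;
}

# recursively serialize [tag, {attrs}, children...] into HTML text
sub to_html {
  my ($node) = @_;
  return esc($node) unless ref $node;      # plain strings are text nodes
  my ($tag, $attrs, @children) = @$node;
  my $attr_text = join '', map { qq( $_="$attrs->{$_}") } sort keys %$attrs;
  return "<$tag$attr_text>" if $empty{$tag};
  return "<$tag$attr_text>"
       . join('', map { to_html($_) } @children)
       . "</$tag>";
}

# the "true source": a nested structure, analogous to (:HTML (:BODY ...))
my $page =
  ['html', {},
    ['body', {},
      ['div', {}, ['img', {src => 'pretty.jpg', alt => 'pretty girl'}]],
      ['p', {}, 'above: A pretty girl.']]];

print to_html($page), "\n";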

> Batch validation is useful because, if i screwed up in my regex,
> usually it ends up with badly formed html, so html validation can
> catch the result.

> In 2008, i[I] converted most [of] my sites from html 4 transitional
> to html 4 strict. The process is quite a manual pain, even [if]
> the files i[I] start with are valid.

If the true source were Lisp data, it would be easier to change the
convertor to make 4-strict output instead of 4-transitional output.

[huge snip]

> I'm still looking for a fast, batch, html validation tool.

IMO you're trying to solve the wrong problem. It's a lot easier to
automatically generate syntactically correct data than to parse and
validate given input data to determine whether it's syntactically
correct. Statistical sampling and/or an all-branch-cases test suite is
sufficient to verify that the generator is correct (producing
correct output).