(If you must know immediately why this post is happening now rather than a couple years ago, see the last paragraph of this post.)
In September 2008 I wrote a web tech blog post about Text.wholeText and Text.replaceWholeText. These are two DOM APIs which I implemented in Gecko before I graduated from MIT and took five months to thru-hike the Appalachian Trail. Implementing whole-text functionality was an interesting little bit of hacking, done in an attempt to pick up as many easy Acid3 points as possible for Firefox 3, with as little effort as possible. The functionality didn’t quite make 3.0, but aside from the missed point I think that mattered little.
The careful reader might think the post contains a slight derision for Text.wholeText and Text.replaceWholeText — and he would be right to think so. As I note in the last paragraph of the post, Node.textContent (or in the real world of the web, innerHTML) is generally better-suited for what you might use Text.wholeText to implement. In those situations where it isn’t, direct DOM manipulation is usually much clearer.
The whole-text approach of Text.wholeText and Text.replaceWholeText is arcane. Its relative usefulness is an artifact of the weird way content is broken up into a DOM that can contain multiple adjacent text nodes, in which node references persist across mutations. It is an approach motivated by fundamental design flaws in the DOM: Text.wholeText and Text.replaceWholeText are a patch, not new functionality. Further, Text.replaceWholeText’s semantics are complicated, so it’s not particularly easy to use it to good effect. (Note the rather contorted example I gave in the post.)
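To make the arcana concrete, here is a toy model of what wholeText computes. It is plain JavaScript, with an array standing in for a parent node’s children (strings for text nodes, objects for elements); it is a sketch of the semantics only, not the real DOM API, which you would exercise in a browser:

```javascript
// Toy model: a parent's childNodes as an array; strings stand in for
// text nodes, objects for element nodes. Text.wholeText on the node at
// index i returns the concatenated data of the contiguous run of text
// nodes containing it.
function wholeText(children, i) {
  let start = i;
  while (start > 0 && typeof children[start - 1] === "string") start--;
  let end = i;
  while (end + 1 < children.length && typeof children[end + 1] === "string") end++;
  return children.slice(start, end + 1).join("");
}

// Three adjacent text nodes, bracketed by elements:
const children = [{ tag: "b" }, "Hello, ", "world", "!", { tag: "i" }];
console.log(wholeText(children, 2)); // "Hello, world!"
```

In the real DOM, repeated calls to document.createTextNode and Node.appendChild can produce such adjacent runs, and Node.normalize() merges them back together — one more reason wholeText is rarely needed.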
Fundamentally, the only reason I implemented whole-text functionality is because it was in Acid3. I believe this is the only reason WebKit implemented it, and I believe it is quite probably the only reason other browser engines have implemented it. This is the wrong way to determine what features to implement. Features should be implemented on the basis of their usefulness, of their “aesthetics” (an example lacking such: shared-state threads with manual locks, rather than shared-nothing worker threads with message passing), of their ability to make web development easier, and of what they make possible that had previously been impossible (or practically so). I know of no browser engine that implemented whole-text functionality because web developers demanded it. Nevertheless, its being in a well-known test mandated its implementation; in an arms race, cost-benefit analysis must be discarded. (The one bright spot for Mozilla: in contrast to at least some of their competitors, they didn’t have to spend money, or divert an employee, contractor, or intern already more productively occupied, to implement this — beyond review time and marginal overhead, at least.)
The requirement of whole-text functionality, despite its non-importance, is one example of what I think makes Acid3 a flawed test. Acid3 went out of its way to test edge cases. Worse, it tested edge cases where differences among browsers posed little cost for web developers. Acid3 often didn’t test things web authors wanted; instead it tested things that were broken or unimplemented regardless of whether anyone truly cared.
The other Acid3 bugs I fixed were generally just as unimportant as whole-text functionality. (Given the time constraints of classes and graduation, this correlation shouldn’t be very surprising, of course, but each trivial test was a missed opportunity to include something developers would care about.) Those bugs were:
- A bug in UTF-16 processing
- cursor: none, fixing a test to ensure all CSS 3 cursor keywords were recognized
- Errors thrown when parsing names and namespaces for programmatically-created elements
- A bug in Element.attributes.removeNamedItemNS
- Some bugs in how we handled omitted versus explicitly undefined arguments to some JavaScript number formatting methods
- A mistake in parsing escapes in JavaScript programs
The UTF-16 bug was exactly the sort of thing to test, especially for its potential security implications; disagreement here is frankly dangerous. (Still, I remain concerned that third-party specification inexactness caused Acid3 to permit several different semantics, listed beneath “it would be permitted to do any of the following” in Acid3’s source. This concern will be addressed in WebIDL, among other places, in the future.) cursor: none was an arguably reasonable test, but it probably wasn’t important to web developers because it had a trivial workaround: use a transparent image. (The same goes for other unrecognized keywords, if with less fidelity to the user’s browser conventions, therefore lending the testing of these keywords greater reasonableness.) But the other tests were careful spec-lawyering rather than reflections of web author needs. (This is not to say that spec-lawyering is not worthwhile — I enjoy spec-lawyering immensely — but the real-world impact of some non-compliance, such as the toString example noted below, is vanishingly small.) Nitpicking the exact exceptions thrown when trying to create elements with patently malformed names doesn’t really matter, because in a world of HTML almost no one creates elements with novel names. (Even in the world of XML languages, element names are confined to the vocabulary of namespaces.) Effectively no one uses Element.attributes, and its removeNamedItemNS method even less, preferring instead {has,get,set}Attribute{,NS}. The bug in question — that null was returned rather than an exception being thrown for non-existent attributes — was basic spec compliance but ultimately not a useful change for web developers. Similarly, the impact of an incorrect difference between (3.14).toString() and (3.14).toString(undefined) is nearly negligible. The escape-parsing bug was an interesting quirk, but since other browsers produced a syntax error it had little relevance for developers. All these issues were worth fixing, but should they have been in Acid3? How many developers salivated in anticipation of the time when eval("var v\\u0020 = 1;") would properly throw a syntax error?
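The last two of those trivialities can be demonstrated in a few lines of JavaScript — a sketch of the behavior the spec (and the corrected Gecko) requires, runnable in any modern engine:

```javascript
// An undefined radix argument to Number.prototype.toString must behave
// exactly like an omitted one: both default to base 10.
console.log((3.14).toString());          // "3.14"
console.log((3.14).toString(undefined)); // "3.14"

// A Unicode escape in an identifier must decode to a character that is
// legal in an identifier. \u0020 is a space, so this declaration is a
// syntax error. (The doubled backslash keeps the escape intact inside
// the string handed to eval.)
let threw = false;
try {
  eval("var v\\u0020 = 1;");
} catch (e) {
  threw = e instanceof SyntaxError;
}
console.log(threw); // true
```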
Other Acid3-tested features fixed by others often demonstrated similar unconcern for real-world web authoring needs. (NB: I do not mean to criticize the authors or suggesters of mentioned tests [I’m actually in the latter set, having failed to make these opinions clear at the time]; their tests are generally valid and worth fixing. I only suggest that their tests lacked sufficient real-world importance to merit inclusion in Acid3.) One test examined support for getSVGDocument(), a rather ill-advised method on frames and objects added by the SVG specification, whose return value, it was eventually determined (after Acid3-spawned discussion), would be identical to the sibling contentDocument property. Another examined the values of various properties of DocumentType nodes in the DOM, notwithstanding that web developers use document types — at source level only, not programmatically — almost exclusively for the purpose of placing browser engines in standards mode. Not all tested features were unimportant; one clear counterexample in Acid3, TTF downloadable font support, was well worth including. But if Acid3 gave web authors that, why test SVG font support? (Dynamically-modifiable fonts don’t count: they’re far beyond the bounds of what web authors might use regularly.) SVG font use through CSS was an after-the-fact rationalization: SVG fonts were only intended for use in SVG. (If one wanted to write an acid test specifically for SVG renderers, testing SVG font support at the same time might be sensible. Acid3, despite its inclusion of a few SVG tests, was certainly not such a test.)
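For contrast, the feature that was worth testing, downloadable fonts, is used like this — a minimal sketch with a hypothetical family name and URL:

```css
/* Hypothetical family name and font URL; the browser downloads the TTF
   and uses it for matching text, falling back to serif on failure. */
@font-face {
  font-family: "Example Face";
  src: url("/fonts/example-face.ttf") format("truetype");
}

h1 {
  font-family: "Example Face", serif;
}
```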
But Acid tests don’t have to test trivialities! Indeed, past Acid tests usefully prodded browsers to implement functionality web developers craved. I can’t speak to the original as it was way before my time, but Acid2 did not have these shortcomings. The features Acid2 tested were in demand among web authors before the existence of Acid2, a fortiori desirable independent of their presence in Acid2.
I have hope Acid4 will not have these shortcomings. This is partly because the test’s author recognizes past errors as such. With the advent of HTML5 and a barrel of new standards efforts (workers, WebGL, XMLHttpRequest, CSS animations and transitions, &c. to name a few that randomly come to mind), there should be plenty of useful functionality to test in future Acid tests without needing to draw from the dregs. Still, we’ll have to wait and see what the future brings.
(A note on the timing of this post: it was originally to be a part of my ongoing Appalachian Trail thru-hike posts, because I wrote the web tech blog post on whole-text functionality during the hike. However, at the request of a few people I’ve separated it out into this post to make it more readable and accessible. [This post would have been in the next trail update, to be posted within a week.] This post would indisputably have been far more timely a while ago, but I write only as I have time. [I wouldn’t even have bothered to post given the delay, but I have a certain amount of stubbornness about finishing up the A.T. post series. Since in my mind this belongs in that narrative, and as I’ve never omitted a memorable topic even if (if? —ed.) it interested no one but me, I feel obliged to address this even this far after the fact.] Now, if you skipped this post’s contents for this explanation, return to the start and read on.)
Although you’re right in saying […], I do think there’s a kink in your thinking when you say […]: whether or not features have a trivial workaround, it’s still a bug and it’s still annoying. Also, the workaround would mean extra images to keep track of and extra HTTP requests.
CSS is there to save us from transparent gifs!
That said, I’ve mailed Hixie with questions about adding in some @font-face spec tests; browsers still aren’t handling them properly in my view.
Comment by James John Malcolm — 18.03.10 @ 15:26
Yes, it was a bug and it was annoying. But I think you’ve missed my point: features tested in Acid3 should have provided functionality not easily replicated in other ways. A single transparent image constitutes an easy way to replicate; the bandwidth and request demands are not onerous. (You could avoid them for small images using data: URLs, too.) The big hurdle of cursor is being able to customize the cursor with a user-specified image: once that’s in place, keywords are nice-to-have but not worth placement in a high-profile test. Things like this belong in general test suites that go for broad coverage of specialized areas, intended to test for exhaustiveness — not in tests that have the reach and prestige to directly influence browser engine developers’ implementation (and order-of-implementation) decisions. Why waste effort on something developers could do in a slightly different way with very little extra effort?
Acid3 is essentially frozen (modulo standards-body uncertainties; see the source for details), so appealing for further @font-face tests won’t result in any changes. It might work for Acid4, but Acid3 is done.
Comment by Jeff — 18.03.10 @ 23:17
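A minimal sketch of that workaround, assuming a hypothetical class name; the data: URL carries the well-known 1×1 transparent GIF, so no extra image file or HTTP request is needed:

```css
/* With keyword support, hiding the cursor is trivial: */
.no-cursor { cursor: none; }

/* Without it, a 1x1 transparent GIF in a data: URL achieves the same
   effect, falling back to the auto cursor if the image fails: */
.no-cursor-fallback {
  cursor: url("data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7") 0 0, auto;
}
```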
Smart and good writing, albeit big ego.
Comment by namename — 19.03.10 @ 02:25
That’s what I meant to write, Acid 4. Acid 3’s been done for a long time.
Because stuff should work properly and predictably. When it doesn’t it’s a pain in the ass for designers.
I don’t think high-profile tests should only test for new, not easily replicated, functionality in a non-specific way. New non-complete features are being added to engines all the time anyway, without tests driving them. Having a high-profile test make sure things really work the way they should is very valuable in my opinion.
Acid 2, btw, included many parsing ‘trivialities’ like /* comments with backslashes \*/.
Comment by James John Malcolm — 19.03.10 @ 02:46
I didn’t mean to suggest subtests in high-profile Acid tests should be non-specific: rather, only that the general target of each subtest should be something where browsers disagree widely (thus making the feature impossible to use) due to mis-implementation or non-implementation. When writing the precise subtests it’s perfectly fine (great, even) to go for the corner cases, edge cases, and inter-feature interaction points. So, for example, rather than just test that generated content “shows up”, you test its layout in interaction with nearby floating content, or with column breaks, and so on. You should still hit the general case while you’re at it — but try not to spend more than one situation in the group of subtests with that as the primary goal.
Yes, one missing hard corner case is annoying. However, I would rather have, say, new and well-tested CSS media query support (Acid3 did an excellent job on this point) than mindless testing that the laundry list of cursor keywords all work, once the threshold of arbitrary custom cursors is supported.
I’m willing to grant Acid2 leeway on some trivialities because in broad strokes what it tested was desirable and not implemented (or was not interoperably implemented).
Comment by Jeff — 19.03.10 @ 03:25
Thanks for the compliment. I put more effort (more importantly, better effort) into polishing this post than I have into others; I think it has specific areas where quality is quantifiably better than in past posts.
I’m curious why specifically you suggest “big ego”. Aside from it being my own opinion and actions and therefore being an expression of egoism (or perhaps egotism? I find the precise distinction elusory), I’m not sure how it is notably egocentric.
Comment by Jeff — 19.03.10 @ 03:41
Re: Namename’s comment: I think it’s a spam comment.
Re: Acid:
Seems like we generally want the same thing, although I still think there are cases where it’s valuable to implement several subtests to test a number of corner cases to make sure that browsers implement a feature completely and properly.
The main crux of my argument for this is that you never know when an apparent edge case can be used as a terrific solution.
Imagine if margin(-right/left):auto hadn’t been implemented interoperably (which isn’t a word, but you know what I mean)!
Comment by James John Malcolm — 22.03.10 @ 02:11
For some people their ego is like a visible cloud following them around, all their actions and thoughts and words are self indulgent and self-affirming.
I meant your writing has this feel of trying very hard to communicate how smart you are, in addition to the original actual content or intent (Acid-DOM-etc).
I just skimmed through it again though, I’m probably wrong. Just ignore these sniping petty comments.
Comment by namename — 22.03.10 @ 11:01
I thought it was spam on first read, but as it didn’t link anywhere, and as it included an email address that looked like a personal email address (first initial, last name, that sort of formulation), I decided otherwise at the end.
I think in many cases you can know when an edge case can and can’t be used as a terrific solution: if the edge case can be duplicated another way with little effort (transparent cursor), or if the process for invoking it is baroque (num.toString(undefined)), at the least, it’s not a terrific solution. It may be marginally better for aesthetic reasons, but aesthetics don’t put bread on the table like something new, properly implemented, can.
Comment by Jeff — 22.03.10 @ 13:20
I wasn’t intending to communicate such, although a few times I did use less common words when I had the opportunity to use them, for the sheer joy of doing so. Buckley’s closing argument when discussing his use of intelligent vocabulary resonates with me. Not every argument must always use the plainest words possible, particularly not when the stakes are low as here. If I fail to convince a reader here due to somewhat-esoteric word choice, the worst of the consequences is that someone disagrees with me. For this argument, I can live with that.
Still, I’m happy to find that, on reread, you think it may have just been a first-time mis-impression. If you do happen to see anything else in the future, feel free to say. It should be possible to flex a vocabulary without putting off too many people if one takes a little care. 🙂
Comment by Jeff — 22.03.10 @ 13:31
[…] systematic. That is, it doesn’t reflect real-world browser use. On this subject it’s interesting to read a post on the blog of Jeff Walden (of Mozilla) in which he admits that parts were added to Firefox’s rendering engine simply […]
Pingback by Sobre Internet Explorer 9, estándares y pruebas varias | MuyWindows — 22.03.10 @ 17:25
“cursor: none” may itself become a workaround (or part of an ingenious solution) for another problem… that’s my point.
Comment by James John Malcolm — 23.03.10 @ 05:05