Refresh show/tell

X-Ray - inlines the markup into the content.

Paparazzi - give it a URL and it will capture the rendered output to a PNG.

Mercurial - code repository tool like SVN?

Soft-wrapping long words

The issue of long, non-breaking words like urls has been around for a while on the web - and the impact this can have on layouts and other places where width is constrained for whatever reason.

I’ve been going back and forth on this, and dug up an old test page on No-wrapping and Soft-wrapping. This has some test cases using <nobr>, <wbr> and the soft-hyphen character &shy;. The results aren’t too pretty:


Mozilla/Firefox:

Ignores &shy;, supports <wbr>, but not when contained in <nobr>.

Windows IE 6:

Wraps correctly with &shy;, displays ‘-‘ only at a wrap-point. Supports <wbr> solo, and when contained in <nobr>.

Safari 2.0:

Wraps correctly with &shy;, displays ‘-‘ only at a wrap-point. Seems to only pay attention to <wbr> in the context of <nobr>, where there are spaces to wrap on.

Opera 9.0:

Wraps correctly with &shy;. Ignores <wbr> completely, supports <nobr>.

(TODO: Need to add tests to the page for CSS scenarios like white-space: nowrap;)
(NOTE: Yes, both nobr and wbr have been deprecated. Unfortunately, unless I missed it, there are no good replacements in CSS for wbr, or in xhtml for either.)

Meanwhile, I took a stab at a javascript-y solution. This is an implementation that looks to insert a wrap-point in a character string to enforce a (given) maximum text column length. So, if you want to make sure that all words wrap at or before 24 characters, it will insert a soft-hyphen (or <wbr/> for mozilla/firefox). It tries (a little) to favor wrapping after punctuation, and be somewhat smart about recognizing words (e.g. by recognizing escape entities, markup).
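The core idea boils down to something like the sketch below. This is a much simplified illustration, not the actual implementation: it has no punctuation preference and no awareness of entities or markup, and it always uses the soft-hyphen character (the <wbr/> alternative for Mozilla/Firefox is left out).

```javascript
// Break any run of non-space characters longer than maxCols by
// inserting a soft-hyphen after every maxCols characters.
function softWrap(text, maxCols) {
  var SOFT_HYPHEN = "\u00AD"; // the &shy; character
  // match maxCols non-space chars followed by at least one more
  var breakable = new RegExp("(\\S{" + maxCols + "})(?=\\S)", "g");
  return text.replace(breakable, "$1" + SOFT_HYPHEN);
}
```

So softWrap("antidisestablishmentarianism", 10) yields the word with a soft-hyphen after every 10th character, while strings whose words are already short enough pass through unchanged.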

Anyhow, here’s a Softwrap test page. It could definitely suffer some optimization, and I suspect could be vastly simplified with some better regexp. But it’s working OK for now.

Re: Has accessibility been taken too far?

(Jeff Croft posted this provocative article which seemed to tap a common feeling that accessibility is a pain in the ass, strictly optional and web designers should be cut some slack)

If you wade through the slop of the first round of comments to this post, there’s actually some reasonable debate that follows. Jeff came out saying he wanted to provoke discussion and (eventually) seems to have done so. It would be nice to think he was playing devil’s advocate, but that’s probably too generous. He does however seem to withdraw some of the more provocative statements and wind up saying you do what you can, when you can, and don’t beat yourself up too badly about it.

It seems to me that accessibility is being treated as one big lump you have to swallow. Particularly in the article heading “Has accessibility been taken too far”. What does that even mean? From where I sit, in practice it hasn’t actually budged much in the last 5 years, though an awareness of what you could do if you cared to might have improved.

There are some aspects of making an accessible design that present real difficulties to a designer, and some that do not. Making layout and content scale and flow sensibly across a useful range of font-size and effective window width is tricky. Making complex forms accessible can also add significant time to a project. But using semantic markup, and good page structure are not really hard at all.

So it seems there might be a useful distinction to be made between “not-meaningless” design, and “accessible” design. Where the former just implies the application of common sense and basic good practices, and the latter actually includes specific accommodations for some particular minority group/environment/technology.

Some is better than none. And if the brow-beating that Jeff refers to is real, it might be counter-productive. What is critical is awareness. And his post and lots of the comments that follow demonstrate some big gaps. There’s a difference between a site looking/sounding crummy and being actually broken. Designers, developers and content authors need a better understanding of the impact of their decisions. Creating valid xhtml is useful, but not critical to accessibility. Css layout too - nice to have. Good alt tags? Only strictly /necessary/ in some cases, though without them it might be confusing and exasperating.

On a typical web project, that a site launches at all is usually a major accomplishment. You have to keep it simple, don’t sweat the small stuff, and so on to get there. Accessibility competes with a host of other requirements for attention. Here’s my scale of 0 to 10 for accessibility:
0: not published at all, in any format. You just had to be there
5: published widely in an available format: perhaps a magazine, or a completely inaccessible web format like .gif or a downloadable wordperfect document. At least I might hear about it, and get someone to help me read it
8: Published in semantic, sensible html. But there’s no alt tags, and no form field labels
10: All the above, plus all our favorite shortcuts and conventions that make quickly grokking the content a breeze in every conceivable browser, screen-reader, device and context.

If you are a brow-beater (and I’d guess anyone with an interest in accessibility has been guilty at some point), this might be a good perspective to keep.

Stepping back a little, awareness of accessibility on the web does seem to have grown to the point that it is one of the criteria I hear being used when assessing quality. And this might be a simpler way to think of it. If a site blows up in IE 5, is mute or unintelligible in Jaws, invisible to googlebot and strains the eyes on a projector - maybe it’s just plain bad. When “Good” includes being accessible, and inaccessible is “Bad”, I think accessibility on the web has finally arrived.

"Surveying OS Ajax Toolkits" article on infoworld

This is well worth a read. Unlike most reviews I’ve seen, this author
obviously spent some time with each of the libraries he includes -
enough to get a meaningful impression of the strengths and weaknesses.
For me, he’s right on the money with Dojo, YUI, Rico, Atlas. I differ
a little on GWT, but in truth it sounds like he spent more time with
it than I did.
Being a dojo guy at present, I think there’s lots in there he missed
or didn’t mention, but the overviews are fair IMO.
Oddly, he didn’t review Prototype/Scriptaculous at all.

Krugle - open source code search engine

I bumped into one of Krugle’s developers at the Ajax experience conference. Looks like they just came out of beta and are open to the public. This is sweet, I can’t emphasize enough how useful this is already proving. 90% of all code (I reckon) is boiler-plate, but by the time you’ve tracked down an implementation (and possibly ported it to your language of choice) it’s easier (say, 75% of the time) to just code it up yourself. Krugle changes that equation. This is going to be fun.

Gripes - the frames and use of javascript: links make it difficult going on impossible to bookmark pages, blow out new tabs etc. Time to fire up Greasemonkey and fix it.

Javascript conflicts and portlet namespaces

First - javascript doesn’t actually have “namespaces”. But the idea is there - unless functions, objects and variables are designated otherwise, they exist in the global scope - properties of the window object. In a portal - where portlets might want to include script libraries to facilitate interaction within the portlet - there’s a risk of conflicts with objects using the same name being redefined by portlet-imported code. There is no “portlet scope” - it would have to be artificially defined. This is doable for instance variables, but more problematic for libraries.
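A common workaround is to emulate a namespace by hanging everything off a single global object, so a portlet claims exactly one name in window scope. A minimal sketch (MyPortlet is a hypothetical name for illustration, not part of any portlet spec):

```javascript
// Emulated "namespace": one global object owns all the portlet's
// functions and state, rather than scattering loose globals.
var MyPortlet = MyPortlet || {};
MyPortlet.instances = {};
MyPortlet.register = function (portletId) {
  MyPortlet.instances[portletId] = { clickCount: 0 };
};
MyPortlet.click = function (portletId) {
  // all state lives under MyPortlet, not directly on window
  return ++MyPortlet.instances[portletId].clickCount;
};
```

This works well for a portlet's own code, but - as noted above - it doesn't help with third-party libraries that were written to define themselves globally.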

Joe Walker (author of DWR - a remoting framework for javascript/java) blogs about one face of this problem here - the $ function

Conflict can exist between code in/required by different portlets on a page, or between portlet code and the portal wrapper (style). The $ function is just one example of how code can compete. The other issue is that as javascript is a dynamic, prototype-based language, it’s quite possible for a script to change core features of the language mid-stream:

Function.prototype.bind = function (someObjectToBindTo) {
    var method = this;
    return function () { return method.apply(someObjectToBindTo, arguments); };
};

This is increasingly common as library authors seek to give developers a familiar environment to code in and add syntactical sugar for common tasks. The problem is that method signatures, return values and functionality may differ between implementations, so redefinition is a real issue. This is where I think there’s a need for guidelines and accepted best practice. Defensive coding will get you around most of it... but not all.
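For example, a defensive version of the bind definition above would only install itself when nothing else has already claimed the name. This is a sketch of the pattern, and it illustrates the limit too: a guard like this avoids clobbering, but can't tell whether an existing implementation is compatible with what the rest of your code expects.

```javascript
// Defensive sketch: only define bind if no other script has.
if (typeof Function.prototype.bind !== "function") {
  Function.prototype.bind = function (scope) {
    var method = this;
    return function () {
      // call the original function with `this` fixed to scope
      return method.apply(scope, arguments);
    };
  };
}
```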

Finally, there’s the even thornier issue of 2 portlets using different versions of the same library. Even if the library was coded to be backward compatible, it would require a way to ensure the earlier version was not included last.

New site theme

Never sure if this is “work” or play. But I got tired of the pond weed look on this site and refreshed it. I mostly just switched a couple colors and a few graphics to make “Lentil 1.1”.

Maybe I’ll even do the blog templates this time? nah… I’m sitting on that while I ponder migrating the blog - which will make me address the more difficult question of what navigation around a blog and its archives should actually look like.

Playing nice with others in javascript

Andrew Dupont has written a very interesting article on Prototype, and the recent $() extensions that allow things like $(someelement).hide() and so on.

This is actually a really nice solution. It’s syntactical sugar without actually extending the element itself. Andrew argues that as an object oriented language, it’s reasonable to want to be able to extend objects like HTMLElement in javascript. I know where he’s coming from, but out in the wild - in a world where my code has to co-exist with that of other authors (co-workers, customers, and potentially end-users via GreaseMonkey and similar) - these core objects and data-types are shared property, not something to take liberties with.
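The wrapper approach can be sketched in a few lines. This is a hypothetical $wrap helper for illustration (not Prototype’s actual code): the sugar lives on a throwaway wrapper object, so shared types like HTMLElement stay untouched.

```javascript
// Wrap an element in a helper object that carries the sugar methods.
function $wrap(el) {
  return {
    hide: function () { el.style.display = "none"; return this; },
    show: function () { el.style.display = ""; return this; },
    node: el // escape hatch back to the raw element
  };
}
```

Usage would look like $wrap(document.getElementById("panel")).hide() - the same ergonomics as extending the element, without modifying any shared object.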

A portal is the extreme case, where the portal framework (“style”) has its client-side scripts, and included portlets can have their scripts. Portlets might be by the same author or vendor as the portal, or not. The potential for collision and overlapping is huge. This is made worse when you consider that portlets using the same libraries might co-exist on the same page; do we download Prototype.js twice? What if the portlet code was developed against a different version of the same library? And finally, to make a difficult problem basically impossible, it’s theoretically possible (I gather) to place the exact same portlet (id and all) twice on the same page.

Much of this is just hypothetical. In practice the portal owner has to take some responsibility for what goes on a page, and should enforce some basic conditions like requiring portlets to not tromp around in the window object and global namespace (and not mess with the fundamental data types that other scripts have to share). And in truth - how much scripting is really necessary on these kinds of portal pages? Most functionality will be a click away when the user actually selects a link from the portal and goes there. But there are some interesting use cases where you’d legitimately want to bubble up richer interactions to the aggregated portal page: how about client-side form validation, tooltips, context-menus, productivity (e.g. select all/none) controls.

The Dojo Toolkit goes some way to addressing these issues. It minimizes its footprint in the global namespace - just the dojo object itself, and a djConfig object. It also has checks in place to safeguard against the unexpected properties that can show up on objects when the core data type prototypes (such as Array, Object, Function) have been extended.
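The classic symptom - and the safeguard against it - looks something like this (a sketch of the general pattern, not Dojo’s actual code):

```javascript
// Simulate a rude library extending a core prototype...
Object.prototype.injected = "surprise";

// ...then iterate defensively: for..in walks inherited enumerable
// properties too, so hasOwnProperty filters the pollution back out.
var data = { a: 1, b: 2 };
var seen = [];
for (var key in data) {
  if (data.hasOwnProperty(key)) {
    seen.push(key);
  }
}
// seen holds only "a" and "b"; "injected" never appears

delete Object.prototype.injected; // clean up after the demonstration
```

Without the hasOwnProperty check, every for..in loop on the page would suddenly see the injected property - which is exactly why extending Object.prototype is so widely frowned on.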

Note, none of this is new or unique to Ajax and the wave of more responsive UI we’re seeing recently. The issue existed long before in even the simplest client-side scripting, as well as CSS (most visibly when a stylesheet defines styles directly on an element such as P, TD, UL etc.) The advent of richer browser-based UI does quickly bring the problem to the fore though…

WCAG 2.0 (as compared to section 508)

I’ve been really impressed with the WCAG 2.0 guidelines. This is a big improvement in the way the guidelines are presented and worded that I think will make adoption much more likely. This page answers the inevitable question: how does WCAG 2.0 line up against section 508? On one hand it adds specifics and details: success criteria and techniques for achieving success. On the other, WCAG 2.0 doesn’t assume the use of HTML to present web-based content - so some detail that is explicit in section 508 has been moved to other sections within the document that provide technology-specific details.

Accessible maps at ALA

This is a nice write-up of making a point-map (a map with information relating to points on that map) in a semantic and accessible manner. I read a lot of articles, and rarely feel compelled to blog them. I was impressed with this one though. It doesn’t shy from digging right into the details, and does an admirable job of working through some complexity to present a real, viable solution. So many articles introduce a single technique and leave “the rest” as an exercise for the reader.

The author also uses a definition list, which is one of my favorite constructs. My only hesitation is that DTs don’t feature in the heading hierarchy, and so as semantically appropriate as they are, you lose that implied structure in most UAs.