Tuesday, August 31, 2004

I read Scott Rosenberg's post today about extraordinary attempts to avoid the appearance of journalistic bias, and then got to discuss it with him as well. I noted that the problem seems to be a bit worse in the US. In Canada and the UK, I think newspapers may be a little happier to be known to have a bias. Not the Globe and Mail, perhaps, which takes a high-horse approach to politics and society despite clearly having biases at times, but at least the Sun. And in the UK I've heard papers are more likely to take sides, and I know the Economist sometimes bluntly admits its bias. In the US, it seems, papers go to greater lengths (including forbidding attendance at benefit concerts) to avoid being seen as a mouthpiece for either the Democrats or the Republicans.

One theory for the extra attempts to appear neutral: since there are only two real US parties, any admitted bias is tantamount to admitting that the paper favours one of them. In Canada, politics may not be quite so partisan or polarized, or at least hasn't been for so long. A Canadian paper with a slight conservative bent could be a PC mouthpiece, or a Reform mouthpiece, or neither, so at least there's confusion about which party it's supposed to be the mouthpiece for. But I'm not sure this theory holds up, because Canadian politics can certainly be partisan (and nasty), and UK politics may be polarized much like American politics.

Another theory is that since Americans are generally so quick to come up with conspiracy theories, they are also quick to assume that politicians are somehow controlling journalistic output. A paper must therefore appear especially untainted to avoid being written off as government-controlled propaganda.

I'm not terribly happy with either of these theories, and maybe I'm wrong that this is uniquely American, so feel free to chime in.

Monday, August 23, 2004

Wikis suck for specs

At OSAF, unfortunately, we don't have very good specs. We have great designs, but it's really hard to go from the designs to knowing exactly what works and what doesn't, what's implemented and what isn't, and how the non-GUI choices were made, at any given release. It's hard for a developer to pick a feature and see how to implement it, because the developer could either see too much (the wiki pages or fragments that show the bigger context for the design, including the aspects of the feature that we can't do today) or too little (missing the wiki pages or fragments that seemed irrelevant but were vital). Part of the problem may be that we didn't have anybody on staff whose job it was to write specifications, the way program managers do at Microsoft. But part of the problem is certainly tools.

OSAF uses a Wiki for most of our current documentation, from proposals to user documentation to meeting notes to personal jottings. The wiki is very fluid and chaotic and not very authoritative. It seems a wiki can be made to be more authoritative -- the wikipedia does this by formalizing around a single kind of content -- but we can't easily do that because our wiki already contains such divergent content. Besides, a wiki isn't an ideal environment for long documents, for printing, for version control, or for template usage.

Length: Specs are ideally longer than the average wiki page. To fully describe a feature so that the different contributors actually agree (rather than just appear to agree), to document the decisions made, and to make the feature testable and documentable might require a 30-page specification for a 1-dev-month feature. Note that a long document requires a table of contents -- ideally clickable if you're reading it on a computer, and with page numbering if it's printed.

Printing: You really want to be able to print specs, even today. I'm tempted to bring printed specs to a meeting and ban laptops to get them reviewed properly. Reviewers can mark up paper. Implementors can glance between paper and screen (and write on the paper) as they work.

Version control: Specs should be versioned and checked in and out and maintained in a versioned repository, just like code. Some wiki software does this but it's not very good yet -- I couldn't view a version of this page from before the picture was added. HTML doesn't version very well in general partly because images are external. And if you break wiki pages into smaller-than-30-page chunks for readability, then the versioning gets worse.

Templates: For template support, I miss Word. When I worked at Microsoft, there were Word wizards who made beautiful, useful templates for many purposes, from expense reports to specifications. As a program manager, I wrote many specifications, and I found that a template somewhat like this one kick-started the process. Word also allows templates to contain "click to fill in" fields (like author, product, release, team information; spec status, implementation status), including support for drop-down boxes to limit selection to a few appropriate choices. My favourite spec template had sections for overview, administrative UI, setup concerns, test considerations, rough documentation plan, and internationalization issues. Each of these sections was a reminder that the feature being specified might have an impact on other groups. When I joined a group that didn't have a template, I made or modified one because I found them so useful. The IETF has specification templates, and they make for some consistency between vastly different protocols and authors. A familiar template vastly improves readability.

Am I missing something that would solve this problem in HTML or in another open format? Is there some easy-to-use tool or set of tools to solve this problem? I know I could write my own software to do this, and it wouldn't even have to be terribly complex -- no worse than the XML2RFC tool, which produces IETF specifications matching the template. But that route is harder both for whoever writes the template (and the software to generate the document) and for the author, who has to get the angle brackets right.
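
To give a sense of what "getting the angle brackets right" means, here's a rough sketch of an xml2rfc source file (the element names come from RFC 2629; the attribute values and section titles are just illustrative, not taken from any real draft):

    <?xml version="1.0"?>
    <!DOCTYPE rfc SYSTEM "rfc2629.dtd">
    <rfc category="std" docName="draft-example-feature-spec-00">
      <front>
        <title>Example Feature Specification</title>
        <author initials="A." surname="Author" fullname="A. Author"/>
        <date month="August" year="2004"/>
        <abstract><t>One-paragraph overview of the feature.</t></abstract>
      </front>
      <middle>
        <section title="Overview"><t>...</t></section>
        <section title="Internationalization Considerations"><t>...</t></section>
      </middle>
    </rfc>

The markup buys you consistent formatting, a table of contents, and enforced section structure, but the author pays for it in fiddliness that a Word template hides behind the UI.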

[Tangentially related: According to Mark's amusing post, as a spec reader, I'm a moron. I don't understand a spec merely by reading it through. After I've implemented a spec or tried to explain it to somebody else, or extend it or edit it, I begin to understand it.]

[UPDATE: Ekr and Sheila point out that a Wiki also sucks for entering text. It's not WYSIWYG, so the process of updating and viewing and correcting pages can be slow. Also you've got to be online to change a page. There's no way to synchronize parts of the Wiki for offline changes other than manual painful silliness. Ekr objected to the whole comparison in the first place. He said that evaluating a Wiki for writing specs was like evaluating a bunsen burner for baking bread.]

Friday, August 20, 2004

When I lived in Seattle I volunteered with a group that ran conferences for middle- and high-school girls to learn about careers. The conferences were arranged so that each girl could pick a slate of careers (3 or 4) and go to workshops where there was supposed to be hands-on practice. For a lot of careers -- vet, doctor -- it's easy to bring in actual real tools and let the girls use them, so those make for easy workshops. It's not so easy for the software industry, however, as I learned when I tried to increase the participation from women at my company.

The obvious approach to teaching girls what it might be like to be a developer is to put them at a computer and show them code. Unfortunately that requires having access to a computer lab at the conference location and setting up all those computers with the same tools. I only saw this done once -- Shaula Massena ran a brilliant workshop where girls used Visual Basic to construct a little prank program they could install on Dad or Mom's PC at home. The program simply popped up a dialog saying "click here to dismiss" -- only when you clicked "here", the dialog jumped somewhere else on the screen :) Shaula sent the girls home with floppies containing their program executables, very cool.

I eventually helped program managers, testers and tech writers come up with workshop plans that didn't require a computer lab and software installation. For testers, we brought some home appliances in -- mixer, hair dryer, toaster -- explained how to write a test plan, and asked the girls to execute the test plan. They explored edge cases like "what happens when you press the toast button when you're already toasting". The girls also always had fun criticizing the design and usability of the devices, which we encouraged.

For tech writers, the workshop involved writing instructions and then watching somebody follow them, to see how challenging it is to write clear instructions. I brought a pile of coloured paper and asked each girl to make a paper airplane (or other origami thing) and then, on another piece of paper, explain how to make the same airplane. Then the girls exchanged instructions and tried to follow somebody else's. At the end we compared the original with the one made by following the instructions. Here, fun was had throwing planes and decorating and naming them as well as constructing them. Some girls decorated their instructions pages too -- budding Web designers.

Finally, for program/product managers, my favourite workshop was "Design your own cell phone". I focused the discussion of features by introducing the concept of a target audience and a limited budget. What features are so important for your target audience that you just have to have them? The girls split into teams and came up with lovely ideas. Often, of course, the target audience was "teenage girls", and one group came up with a "privacy phone" that would stop your bratty brother from hearing any of your conversations or messages. But one group targeted old people and thought about what it would take to handle a user with poor eyesight and hearing. And another group targeted young kids (or really, concerned parents of young kids) and designed a brightly-coloured egg-shaped phone, wearable on a lanyard around the neck, with only three programmed "send call" buttons so that the kid could only call appropriate authority figures to arrange a pick-up time, report an emergency, and so on. The girls thought that the phone should also have toy-like features so that the kid would have something to play with besides calling Mom, Dad and Emergency, so they added a couple more buttons to play ringtones or record and play back thoughts.

For six years I've thought that the kidphone would in fact make a cool product. Finally I find validation via Gizmodo: the MYMO phone for kids. I should have hired that team of girls and founded a company.

Sunday, August 15, 2004

Everybody is knitting ponchos this summer, so I had to as well.

Wednesday, August 11, 2004

If this background material is familiar to you, scroll to the bottom for the call to arms.

I've been following calendaring standards for six years now. In that time, iCalendar has become a fairly widely accepted standard for expressing events, todos and other calendar information in a MIME text document. Many calendaring applications can import iCalendar files, or export to them, or generate an email with an invitation formatted as an iCalendar document. However, there are some problems with iCalendar's complexity, particularly in expressing recurrences, and the companion standards for conveying iCalendar files (iMIP and iTIP) have their own problems.
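
For readers who haven't seen one, here's a minimal sketch of an iCalendar object describing a single weekly recurring event (the property values are made up for illustration; real objects from shipping products usually carry many more properties):

    BEGIN:VCALENDAR
    VERSION:2.0
    PRODID:-//Example Corp//Example Calendar//EN
    BEGIN:VEVENT
    UID:20040811-1234@example.com
    DTSTAMP:20040811T120000Z
    DTSTART:20040901T170000Z
    DTEND:20040901T180000Z
    SUMMARY:Team meeting
    RRULE:FREQ=WEEKLY;BYDAY=WE
    END:VEVENT
    END:VCALENDAR

Even this simple RRULE hides a lot of subtlety -- time zones, exceptions, COUNT versus UNTIL -- which is exactly where much of the interoperability pain shows up.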

Calendar interoperability testing has happened sporadically in these years. The first calendar interoperability event was held in 2000, a virtual event followed in 2002, and another in-person event took place this summer at Berkeley, organized by the newly revitalized CalConnect consortium. Still, interoperability hasn't improved as much as we'd like, because at some point you need to go back and fix the standard.

Also during these six years, the CalSch working group has worked and bickered over the proposal to standardize how clients access a calendar stored on a server -- this protocol, CAP, would be to calendars what IMAP is to email folders. I've never liked the design of CAP, down to the very core model of how it works. So six months ago I made a counter-proposal, CalDAV, and threw it out there as an internet-draft. Finally I'm getting more than just nibbles on CalDAV; in fact, several groups have told me they intend to implement CalDAV starting this year. And at the same time, other folks are getting invigorated about revising iCalendar and bringing it to draft standard.
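
To give a flavour of the CalDAV approach (this is a sketch of the general idea, not text from the draft): calendar data lives in ordinary iCalendar resources inside WebDAV collections, so a client can create or fetch an event with plain HTTP methods and then lean on WebDAV machinery for listing, locking and access control. The URL below is invented for illustration, and some required headers are omitted for brevity:

    PUT /calendars/users/lisa/work/event-1234.ics HTTP/1.1
    Host: cal.example.com
    Content-Type: text/calendar

    BEGIN:VCALENDAR
    ...one VEVENT like the example above...
    END:VCALENDAR

Querying and scheduling need more machinery than this, of course, but building on HTTP and WebDAV is a big part of what makes early implementations feasible.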

This is all great news. Calendaring interoperability has languished except for that burst of productivity back in 1998. People are locked into one calendar application depending on what server technology they have available, since there's no common calendar access standard. Invitations work, kinda, but in practice the problems with recurrences mean that people must follow up the machine-readable text with human-readable text in case a mistake was made between two different vendors' software.

Good news, but nowhere near done yet -- this is just the beginning. Now we need volunteers. We need people to write specification text, review text, and manage the issues. We need people simply to pay attention to the work being done and provide their experience, or simply opinions, to bring issues to resolution.

Here's where to start -- two new mailing lists, one to take iCalendar to draft standard, one to develop and standardize CalDAV. Tell your friends! Subscribe to one, subscribe to both! We've got good work to do and good people to do it but we need your help.

Monday, August 09, 2004

Tim Bray, a veteran of many other Internet/engineering communities, is a newcomer to the IETF, and was slightly disturbed by some of what he saw. With an open mind, he attended both a few official in-person meetings and what Saint Peter calls the "undernet" of the plenary: the badattitude jabber room (named after a tradition of corporate criticism).

Like Saint Peter, I'm not concerned about what Tim Bray calls the "severe angst" that appeared in badattitude. I think when you invite a group of people together to complain and rant, they do so, attempting to be humorous and entertain each other, at the expense of reflecting the entire reality and balance of their opinions. In fact, several of us have noticed that ever since badattitude has existed during IETF plenaries, there's much less dissent voiced at the microphone. Badattitude only has about 60 people and there can be 600 in the actual plenary room, so it's hard to believe that allowing 10% of the people to let off steam can directly reduce microphone usage by a noticeable amount (clearly more than 10%). Theories include:
  • Uncorrelated -- people stopped complaining during open-microphone for other reasons.
  • Tipping point -- the few people whose frustration was reduced through badattitude, and didn't go to the microphone, brought microphone usage below some tipping point, influencing others not to complain as well.
  • The Squawker Squad -- the possibility that the people I know happen to be those most likely to complain, and by drawing them into badattitude I induced a significant percentage of the bellyachers to give their opinions elsewhere.
The first theory is the most likely but I prefer the last (/me grins).

I was supposed to have a full transcript of the badattitude jabber room due to the automatic logging functionality of the jabber client, but nitro ate the chat log. Really -- it's gone, except for the first five minutes. Honest. Anyway, some folks logged in under their real names and might not like it published.

Friday, August 06, 2004

WebDAV and Wiki get compared frequently. They're often in frameworks that solve the same problem, but they play different roles within those frameworks.

A Wiki is a Web site running Web application code that allows people to browse Web pages, easily add new pages, and easily edit any page through Web forms (the Web application code powers those forms and handles their input). Originally "Wiki" described the first such application, but the word has spread, and similar applications running completely different code are also called Wikis.

WebDAV is an IETF Proposed Standard, RFC 2518, describing a set of extensions to the HTTP protocol. These extensions allow a client application (or other software agent) to create new HTTP resources more easily, manage them, find them without following hyperlinks, and change them in a way that lets multiple authors work together simultaneously.

Some points of comparison:
  • Wiki uses HTTP forms to transport requests. WebDAV extends HTTP by defining new methods (see the example after this list).
  • Wiki is a UI. WebDAV is completely orthogonal to a UI.
  • There exist a number of different Wikis, implemented in different languages, supporting different features. There are a lot of different WebDAV servers as well as WebDAV clients; the clients can theoretically all work against any server, although in practice there are a few outliers (special-purpose clients or servers) that don't interoperate so well with the main group.
  • Wiki is a thin-client application, where any browser can be the client because the smarts are on the server (the code running the Web forms). WebDAV was created to allow rich clients to author Web resources; a WebDAV client is definitely the entity in control -- but you have to download or find WebDAV client software. Luckily, these clients now ship in all Windows and Mac OS's.
  • Wiki is a generic name that seems to get applied to anything similar to the original Wiki. WebDAV is a set of requirements, so a WebDAV implementation can be tested for compliance.
  • You'd probably not call something a Wiki if it didn't support multiple users, allow easy creation of new pages, and result in a Web-site collection of human-readable pages. WebDAV can be used to synchronize with a source code repository, or to change configuration files, or to access a calendar server, or to do many other things that you would never call "shared authoring".
  • It's easy to extend (a) Wiki: just code in the new feature -- e.g. search. It's hard to extend the WebDAV standard: it requires IETF drafts, reviews, and many revisions in the IETF doc format, and it can take years -- e.g. SEARCH, begun around 1998. OTOH it's easy to extend a WebDAV server -- e.g. Xythos added anonymous permission tickets to the WFS server -- but no other clients are likely to interoperate with a custom feature like that for a while.
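
To make the first point concrete, here's roughly what the two look like on the wire. The paths and form field names are invented for illustration, and some required headers (like Content-Length) are omitted; the WebDAV request uses a method and header defined in RFC 2518. A Wiki edit is an ordinary HTML form submission:

    POST /wiki/edit/SomePage HTTP/1.1
    Host: wiki.example.com
    Content-Type: application/x-www-form-urlencoded

    text=New+page+contents&save=Save

A WebDAV client instead speaks new methods such as PROPFIND, MKCOL and LOCK -- here, listing the members and properties of a collection:

    PROPFIND /docs/ HTTP/1.1
    Host: dav.example.com
    Depth: 1
    Content-Type: text/xml

    <?xml version="1.0" encoding="utf-8"?>
    <D:propfind xmlns:D="DAV:">
      <D:allprop/>
    </D:propfind>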


Anyway, in this case, the different roles mean that Wiki and WebDAV are not at all incompatible, and comparing their advantages and disadvantages as if they were both apples can be misleading. Brad, whom I met at a party last week, realizes this: he added WebDAV support to an existing Wiki codebase. It provides another way to add or modify Wiki pages, and it can interact completely smoothly with the existing Web UI. You'd think somebody would have done this before now, but I can't see that anybody has. Way to go, Brad.

[UPDATE Aug 9: clarified "extend webdav" to differentiate between extending the standard, and extending a server]


Creative Commons License
This work is licensed under a Creative Commons Attribution 3.0 Unported License.