Thursday, December 20, 2007

RSS is *everywhere*

UPDATE: for some reason not all the images come through in my feed reader; there should be three images below: two smaller ones side-by-side, and then the larger one that is a larger shot of the drinking fountain top.
***

First there was the coffee cam, then the internet-enabled refrigerator, now there are water fountains with RSS feeds:


[Images: two smaller shots side by side, captioned "Separated at birth?", followed by a larger shot of the drinking fountain top]



Monday, December 17, 2007

Misuse of the progress bar in web applications

A while back it became fashionable to have animations appear in or layered on top of web pages during page transitions to indicate that the user's request was being processed and that the page was not just sitting there waiting for input or locked up. That's all well and good, except for one thing: it doesn't mean anything like "your request is being processed." The indicator is just an animation that is preprogrammed to cycle through the same set of images until the next page loads, giving the appearance that something is happening, but all it really means is that the animation loaded. It's the web version of "this page intentionally left blank," but worse, because it is misleading rather than just paradoxical.

There is no question that feedback improves usability, so it is the right thing to do to visually advise the user that the application knows a button has been clicked, and so on. But there is one type of status indicator that has no place on a web page: the segmented progress bar. It is a carryover from desktop apps, where the bar really does indicate the relative completeness of a process, but it doesn't work in a web page (it works fine for browser status, but that's different). Unlike desktop apps, web apps are not wired into the server-side events as they take place, so the segments indicate nothing at all.

The problem developers and designers are faced with is that the bar can't just move from left to right once and then stop, because the process may take more or less time than one cycle of the animation. Clever folks (not good clever) have taken to making the bar cycle repeatedly from left to right instead, as if that makes any more sense, unless you are a fan of Battlestar Galactica or Knight Rider.

Some alternatives I have seen include a message in the status bar of the browser itself, which is hard to see and easy to ignore, and the blinking dot, which is old school in a command-line sort of way (or in the way of the highway information sign when it contains no accident information or fictitious travel times).

The best I have seen are animated circles or spheres with a piece of the image that travels around them, such as the belly-button ring on the Nintendo Wii (take a look, you'll see what I mean), or even the beachball in Windows Mobile apps. In the case of the Wii, the indicator is a simple circle with a dot moving around it, so there is no question that something is happening. Likewise with the beachball: it is simple but effective, because a circle has no starting point and no ending point, so there is no problem of deciding what to do after the animation completes its first pass.

I probably won't be able to pry the nearly useless status indicators out of web developers' hands, but please, please, please get rid of the Cylon eyes!

Monday, December 10, 2007

Linq + Anonymous Types = bliss

If you've ever been faced with the task of reading a list of object properties from XML and instantiating objects from the node text, you know that while it is relatively straightforward on the surface, it can still be a lot of hassle: create the DOM object, work up the XPath to get what you want, create a strongly-typed object to hold the properties you are pulling, then test and convert each node's inner text and stuff it into the object.

I have been following along with Scott Guthrie's description/tutorial of the new ASP.NET MVC Framework and ran across a little tidbit that made the job of using XML in my application a whole lot easier: populating anonymous types with the results of a Linq query. Here's an example:


// Requires: using System.Linq; using System.Xml.Linq;
XDocument buildFile = XDocument.Load(args[0]);

// Project each <file> element into an anonymous type; no class
// declaration needed, the shape is inferred from the initializer.
var builds = from file in buildFile.Descendants("file")
             select new
             {
                 SourcePath = file.Element("sourcePath").Value,
                 TargetPath = file.Element("targetPath").Value,
                 ReplaceToken = file.Element("replaceToken").Value,
                 ReplaceWith = file.Element("replaceWith").Value
             };

foreach (var file in builds)
{
    // Replacer swaps the token for the replacement text in the
    // source file and hands back the updated contents.
    var replacer = new ReplaceInFile.Replacer(
        file.SourcePath,
        file.TargetPath,
        file.ReplaceToken,
        file.ReplaceWith);

    string contents = replacer.Replace();

    Utilities.SaveFile(file.TargetPath, contents);
}
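
The element names in the query dictate the shape of the input file. For reference, the build file it reads would look something like this; the query only cares about the <file> elements and their children, so the root element and the sample values here are my own invention:

<build>
  <file>
    <sourcePath>templates\web.config.template</sourcePath>
    <targetPath>output\web.config</targetPath>
    <replaceToken>$CONNECTIONSTRING$</replaceToken>
    <replaceWith>Server=localhost;Database=dev;</replaceWith>
  </file>
</build>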


The anonymous type comes in really handy here because I only need it for a very limited purpose and will not need it anywhere else. Previously I would have created a new class to hold the properties, but now I can do it on the fly.
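
To put that in perspective, here is roughly what the pre-Linq version of the same job looks like: a throwaway class plus the XmlDocument/XPath plumbing to fill it. This is a sketch of the general pattern, not code from my project, and the names are invented:

// Requires: using System.Xml; using System.Collections.Generic;
// A class that exists only to ferry four strings around.
public class BuildEntry
{
    public string SourcePath;
    public string TargetPath;
    public string ReplaceToken;
    public string ReplaceWith;
}

// Load the document, select the nodes, and copy each value by hand.
XmlDocument doc = new XmlDocument();
doc.Load(args[0]);

List<BuildEntry> builds = new List<BuildEntry>();
foreach (XmlNode node in doc.SelectNodes("//file"))
{
    BuildEntry entry = new BuildEntry();
    entry.SourcePath = node.SelectSingleNode("sourcePath").InnerText;
    entry.TargetPath = node.SelectSingleNode("targetPath").InnerText;
    entry.ReplaceToken = node.SelectSingleNode("replaceToken").InnerText;
    entry.ReplaceWith = node.SelectSingleNode("replaceWith").InnerText;
    builds.Add(entry);
}

Every one of those assignment lines is a chance for a typo or a null reference, and the class itself is pure ceremony.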

Maybe calling it blissful is a bit much, but it definitely is an improvement over the old way of doing things.

Thursday, December 06, 2007

Using gmail chat for AIM contacts

More cool features from google chat: you can now log into AIM and chat with your AIM contacts without leaving google chat. You still need an AIM account, but you no longer need the AIM client to talk to your contacts on that service.

It's about time, too. Google chat is a Jabber client, and there are transports out there that bridge between Jabber and other chat protocols, but for some reason google does not run servers with those transports. You can get the same effect through other people's servers, but it is a pain to configure: there is no google client or settings page for it, so you have to download another client (like Psi), find your way to a server of questionable security and availability, and perform some voodoo to get everything working right. Even then it is iffy.

I have long wondered why google has not set up transports and whether they someday will, and it looks like this may be a step in that direction. I don't yet know whether they are using the XMPP transports or some other method to do the bridging, but whatever they are doing I hope they extend it to ICQ (really an AOL property), MSN, and Yahoo! so I can ditch my desktop chat clients entirely.

Wednesday, December 05, 2007

Visual Studio 2008

I installed Visual Studio 2008 yesterday and have been itching to get my feet wet with it. There are a lot of cool features in the 3.5 framework, and a lot of improvements in the development environment as well. One feature I am particularly interested in is multi-targeting, which gives you the option to choose a target framework (and hence deployment environment) other than the default 3.5 version.

To me this is a huge improvement. It's one thing to fire up a new version of VS and crank out some sample projects, but as soon as you want to deploy them into an environment that does not yet support the newest framework version you are stuck. What ends up happening is a sort of dual existence between the old and new versions, with existing code dragging you back into the old framework for way longer than you really want to be there. The promise of multi-targeting is that you immediately reap the benefits of the new IDE and some of the framework classes while still being able to work on projects that target a previous framework version.
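
Under the hood, the target choice ends up in the MSBuild project file itself. If you open a converted .csproj you should find an element along these lines (the exact markup here is from memory, so treat it as illustrative rather than gospel):

<PropertyGroup>
  <TargetFrameworkVersion>v2.0</TargetFrameworkVersion>
</PropertyGroup>

Presumably that value is how the 2008 IDE knows to compile against the 2.0 assemblies.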

My first experience with 2008 was to convert an existing 2.0 ASP.NET solution that has five projects. I knew you could choose a target environment for new projects but I was not sure how it would work converting an existing one; it turned out to be no problem:

[Screenshot: the Visual Studio 2008 conversion wizard, including the choice of whether to upgrade the solution's target framework]

I chose not to upgrade because I want to be able to deploy the app on my current 2.0 host. The conversion succeeded, the app built on the first try, and it ran without a problem. Next came the real test: push it out to my web host, which is running the 2.0 framework, and see if there are any hiccups. I am happy to report that so far I have not run into any problems, and I am looking forward to really digging in to see what 2008 has to offer.

A minor request to the blogging world Part 2

In the comments, Sarge asked "Do you click through?" about blogs that only provide summaries in their feeds. My answer at the time was that I don't bother, mainly because it's a hassle, the subtext being that it is counter to the spirit of a published feed to dictate how I consume it by forcing me to visit the site and view junkloads of ads along with the blog content.

Well, it appears I have to eat my words when it comes to The Dilbert Blog, by Scott Adams. Recently Adams changed his feed settings to show only a few words from the first sentence of each post, down from the full posts the feed had carried for the two years he has been writing the blog. A couple of days before making the change he blogged that people come to the actual site only long enough to find the link to the RSS feed, his feeling being that they do so to avoid the ads on the site. I am curious to see whether he later provides an update on site visits since the change to the feed settings.

It would not be difficult technology-wise to stick ad content (text, images, whatever) into feed content and I won't be surprised if it starts happening, although I suspect the feed-o-sphere will be up in arms about it if it does.

Monday, December 03, 2007

What does automated testing miss?

A friend of mine once said that you can judge the intelligence of a driver by the number of stickers plastered to the back of the car. He asserted that the relationship is inversely proportional, which I found insanely hilarious at the time, and I still think it is a clever turn of phrase even though it is essentially an unprovable generalization. The point is that saying "support world peace" fifty different ways doesn't make you a better person, and it often has the opposite effect of betraying a lack of aesthetic sense and poise.

Joel on Software gave a talk at the Yale Computer Science department recently wherein he made what I perceived to be a similar assertion about the relationship between the attention given to certain classes of bugs and the ability to test for them automatically:

And so one result of the new emphasis on automated testing was that the Vista release of Windows was extremely inconsistent and unpolished. Lots of obvious problems got through in the final product… none of which was a “bug” by the definition of the automated scripts, but every one of which contributed to the general feeling that Vista was a downgrade from XP. The geeky definition of quality won out over the suit’s definition; I’m sure the automated scripts for Windows Vista are running at 100% success right now at Microsoft, but it doesn’t help when just about every tech reviewer is advising people to stick with XP for as long as humanly possible.


Whether that is true in the case of Windows Vista is open for debate, but the notion that 100% successful unit tests may still leave bugs and internal inconsistencies is one that has been on my mind a lot lately. Joel makes the point elsewhere in his talk that writing good code depends on writing good specs, and writing good specs is just as hard as writing good code. Writing good tests is equally hard, yet there seems to be a belief out there that as long as you have lots of them, passing unit tests make your code good.

If something is difficult to unit test, such as stored procedures or user interfaces, a couple of things happen. First, a subtle prejudice develops against those areas (if they can't be, or just aren't, tested, they must not be very important). Second, the wrong tool for the job gets used: SQL finds its way into code instead of living in stored procedures, UI issues are deemed trivial so long as the feature being implemented at least works, and so on.

It would be a mistake to argue that unit testing should not be done for these reasons, but there needs to be a greater awareness of the relative importance of other types of testing as well. We should take care not to think we have done a complete job of testing just because we can point to a huge stack of tests that only test one area of an application. Perhaps it would be wise to spend a little less time copying and pasting the same test multiple times and changing one piece of it and a little more time making a comprehensive test plan.
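
On that last point, a lot of the copy-paste-and-tweak pattern can be collapsed into a single table-driven test, which frees up time for the test plan. Here is a minimal sketch in C# with NUnit-style attributes; StringUtils.Truncate is an invented method standing in for whatever is under test (and, fittingly, the anonymous types from the Linq post above come in handy for the case table):

using NUnit.Framework;

[TestFixture]
public class TruncateTests
{
    [Test]
    public void Truncate_HandlesTheUsualSuspects()
    {
        // Each entry replaces what would otherwise be a
        // copied-and-pasted test method with one value changed.
        var cases = new[]
        {
            new { Input = "hello world", Max = 5, Expected = "hello" },
            new { Input = "hi",          Max = 5, Expected = "hi" },
            new { Input = "",            Max = 5, Expected = "" }
        };

        foreach (var c in cases)
        {
            // StringUtils.Truncate is hypothetical; substitute the real subject.
            Assert.AreEqual(c.Expected, StringUtils.Truncate(c.Input, c.Max),
                "Input: '" + c.Input + "'");
        }
    }
}

Collapsing the boilerplate doesn't make a suite comprehensive by itself, but it leaves more time for thinking about what is not being tested at all.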