Category Archives: Usability


Notes: Web site usability, design, and performance metrics

Palmer, J.W. (2002). Web site usability, design, and performance metrics. Information Systems Research, 13(2), 151-167.

In this study Palmer looks at three different ways to measure web site design, usability, and performance. Rather than testing specific sites or trying out specific design elements, this paper looks at the validity of the measurements themselves. Any metrics must exhibit at least construct validity and reliability—meaning that the metrics must measure what they say they measure, and they must continue to do so in other studies. Constructs measured included download delay, navigability, site content, interactivity, and responsiveness (to user questions). The key measures of the user’s success with the web site included frequency of use, user satisfaction, and intent to return. Three different methods were used: a jury, third-party rankings (via Alexa), and a software agent (WebL). The paper examines the results of three studies, one in 1997, one in 1999, and one in 2000, involving corporate web sites. The measures were found to be reliable, meaning jurors could answer a question the same way each time, and valid, in that different jurors and methods agreed on the answers to questions. In addition, the measures were found to be significant predictors of success.
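The reliability statistic itself is nothing exotic. As a rough illustration—my own sketch, not Palmer's actual procedure or data—Cronbach's alpha is one common way to check that multiple questionnaire items hang together as a single construct (whether or not it is the exact statistic the paper used):

```python
# A minimal sketch (hypothetical data) of Cronbach's alpha, a common
# reliability statistic for multi-item constructs. Rows are respondents
# (jurors); columns are items meant to measure one construct,
# e.g. "navigability".
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """scores: respondents x items matrix of ratings."""
    k = scores.shape[1]                         # number of items
    item_vars = scores.var(axis=0, ddof=1)      # variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)  # variance of summed scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical ratings: 5 jurors answering 4 navigability items (1-7 scale).
ratings = np.array([
    [6, 5, 6, 6],
    [5, 5, 5, 6],
    [7, 6, 6, 7],
    [4, 4, 5, 4],
    [6, 6, 6, 5],
])
print(f"alpha = {cronbach_alpha(ratings):.2f}")  # >= 0.7 is often deemed acceptable
```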

This is an interesting article because in my experience, usability studies are often all over the place, with everything from cognitive psychology and physical ergonomics to studies of server logs to formal usability testing to “top ten usability tips” lists. Some of this can be attributed to the fact that it is a young field, and some of it is due to the different motives fueling research (commercial versus academic). One thing in the article I worry about, however, is any measure of “interactivity” as a whole. Interactivity is not a simple concept to control, and adding more interactivity is not always a good idea. Imagine a user trying to find the menu on a restaurant’s web site—do they want to be personally guided through it via an interactive Flash cartoon of the chef, or do they want to just see the menu? Palmer links interactivity to the theory of media richness, which has a whole body of research behind it that I am no expert on. But I would word my jury questionnaires to reflect a rating of appropriate interactivity.

The most important impact of this study is that it helps put usability studies on a more academically sound footing. It is very important to have evidence that you are measuring what you think you are measuring. It would be interesting to see if other studies have adopted these particular metrics because of the strong statistical evidence in this study.

The most straightforward metric, download delay, is also one that has been discounted lately. The thought is that with so many users switching to broadband access, download speed is no longer the issue it used to be. That thinking is especially false for sites with information-seeking interfaces, which are often very dynamic and rely on database access. No amount of bandwidth will help if your site’s database server is overloaded.
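Download delay is also trivially easy to measure for yourself. A minimal sketch (the URL is a placeholder); note that for a dynamic, database-backed page the elapsed time includes server-side work, not just bandwidth-bound transfer:

```python
# Time the full round trip for a page, including server-side processing.
import time
import urllib.request

def download_delay(url: str) -> float:
    start = time.perf_counter()
    with urllib.request.urlopen(url) as response:
        response.read()  # wait for the full body, not just the headers
    return time.perf_counter() - start

print(f"{download_delay('https://example.com/'):.3f} s")
```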

Notes: Why are online catalogs still hard to use?

Borgman, C.L. (1996). Why are online catalogs still hard to use? Journal of the American Society for Information Science, 47(7), 493-503.

In this 1996 study, Borgman revisits a 1986 study of online library catalogs. In the original study, computer interfaces and online catalogs were still fairly new—the study looked at how the design of traditional card catalogs could inform the design of new online catalogs. By the time of this study online catalogs were common but still not easy to use. Three kinds of knowledge were seen as necessary for online catalog searching: conceptual knowledge of the information retrieval process in general, semantic knowledge of how to query the particular system, and technical knowledge including basic computer skills. Semantic knowledge and technical knowledge differ here in the same way as semantic and syntactic knowledge in computer science. The study also covers specific concepts like action, access points, search terms, Boolean logic, and file organization. In the short term, Borgman recommends training and help facilities so users can gain the skills they need to use current systems. In the long run, though, libraries must apply the findings of information-seeking process research if they are ever going to create usable interfaces.

The study does point out a number of reasons why online catalogs are difficult for users, whether because they lack computer skills or semantic knowledge. One good example comes from a common type of query language. Even if the user knows that “FI” means “find” and “AU” means “author,” they may not know whether to use “FI AU ROBERT M. HAYES,” “FI AU R M HAYES,” “FI AU HAYES, ROBERT M,” etc., or how the results will differ. Unfortunately the article lacks clear instructions or examples of how to make the systems better. The conclusion that different types of training materials could be helpful seems to me like a bandage rather than a cure.
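One obvious softening of the syntax problem would be for the system to simply accept all of those forms. A purely hypothetical sketch (not drawn from any real catalog) of normalizing the author-name variants above to one index key:

```python
# Hypothetical normalization so "FI AU HAYES, ROBERT M",
# "FI AU ROBERT M. HAYES", and "FI AU R M HAYES" all match the same key.
def normalize_author(query: str) -> str:
    name = query.upper().removeprefix("FI AU").strip(" .")
    if "," in name:                       # "HAYES, ROBERT M" -> surname first
        surname, rest = [p.strip() for p in name.split(",", 1)]
    else:                                 # "ROBERT M. HAYES" -> surname last
        parts = name.replace(".", "").split()
        surname, rest = parts[-1], " ".join(parts[:-1])
    initials = "".join(word[0] for word in rest.split())
    return f"{surname} {initials}"        # e.g. "HAYES RM"

for q in ("FI AU ROBERT M. HAYES", "FI AU R M HAYES", "FI AU HAYES, ROBERT M"):
    print(normalize_author(q))            # all three print "HAYES RM"
```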

I think a lot of the criticisms are still true, but modern cataloging and searching systems have become easier. I’m not so sure it’s because catalog designers have started applying information-seeking research in their interfaces, though. It almost seems like library systems are being made easier in self-defense. Users are getting more and more used to a Google or Yahoo type interface—a simple search box that looks at full text and uses advanced algorithms to find relevant results. I think part of this is due to the fact that people in the library field have experience with complicated, powerful structured search systems and are used to a lot of manual encoding of records. Web developers, lacking this background, have been freer to think in terms of searching massive amounts of unstructured data and automating the collection and indexing process. I also think that simple things, such as showing results with a summary of each item in a scrollable, clickable list, have helped a great deal to support the information-seeking process. Things like search history, “back” and “forward” buttons, “search within these results,” and automatic spell checking are becoming pretty standard as well.
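The contrast is easy to see in code. A toy sketch of the “Google-style” approach: automatically index the full text of unstructured documents, then answer a plain keyword query with no command syntax at all (the documents are invented for illustration):

```python
# Build an inverted index over unstructured text, then do keyword search.
from collections import defaultdict

docs = {
    1: "library catalogs and information retrieval",
    2: "information seeking behavior of catalog users",
    3: "full text search over unstructured data",
}

index = defaultdict(set)                  # term -> set of doc ids
for doc_id, text in docs.items():
    for term in text.lower().split():
        index[term].add(doc_id)

def search(query: str) -> set[int]:
    """Return ids of documents containing every query term."""
    results = [index[t] for t in query.lower().split()]
    return set.intersection(*results) if results else set()

print(search("catalog information"))      # {2}
```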

Usability test of the Kent State IAKM home page

Note: this report shows the results of a usability test of the Information Architecture and Knowledge Management program web site at Kent State University in 2003. The site has since been redesigned.

1. Introduction

In a usability study of the IAKM web site I found a number of serious problems. Current IAKM students were asked to complete a series of tasks using the site. Although participants were able to complete the tasks 91.67 percent of the time, they met all performance goals for each task only 36.11 percent of the time. The site is not fundamentally broken, but clearly there is room for improvement. Through statistical analysis, observation of the students, and the students’ own remarks, a number of issues were uncovered.
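The raw counts are not given here, but both percentages are consistent with 36 task attempts (for example, 12 participants each attempting 3 tasks); a quick check:

```python
# Hypothetical counts reconstructed from the reported percentages.
completed = 33
met_goals = 13
attempts  = 36
print(f"completion: {completed / attempts:.2%}")  # 91.67%
print(f"goals met:  {met_goals / attempts:.2%}")  # 36.11%
```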

Many of the problems were global problems with site navigation and labeling, but there were also a number of prominent local problems. The severity of each problem was rated using three categories:

  • Severe—prevents the user from completing a task or results in catastrophic loss of data or time.
  • Moderate—significantly hinders task completion but users can find a work-around.
  • Minor—irritating to the user but does not significantly hinder task completion. (Artim, 1).

Problems are also rated by scope. Any problem can be either global, meaning it applies to most pages or the site as a whole, or local, meaning it is particular to a page or specific section. Global problems are generally more pressing than local ones.
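For what it’s worth, this rating scheme is easy to capture as a data structure; a small sketch of my own (not from the report), with an ordering that puts global problems ahead of local ones:

```python
# Record findings under the severity and scope scheme described above.
from dataclasses import dataclass
from enum import Enum

class Severity(Enum):
    SEVERE = 3    # prevents task completion or causes catastrophic loss
    MODERATE = 2  # significantly hinders, but a work-around exists
    MINOR = 1     # irritating, but does not significantly hinder

class Scope(Enum):
    GLOBAL = "global"  # affects most pages or the site as a whole
    LOCAL = "local"    # particular to one page or section

@dataclass
class Finding:
    description: str
    severity: Severity
    scope: Scope

findings = [
    Finding("Inconsistent navigation labels", Severity.MODERATE, Scope.GLOBAL),
]
# Global problems first, then by descending severity:
findings.sort(key=lambda f: (f.scope is Scope.LOCAL, -f.severity.value))
```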

Findings are presented first in order of importance, followed by a description of the study methods.
