Category Archives: Writing


A user-centered redesign of the Kent State SLIS site

Note: This was originally created for an information architecture class – the project was to redesign the Kent State School of Library Science web site. You can also see a usability study of the site.

Executive Summary

The current Kent State University School of Library Science (SLIS) web site does not meet the needs of the department. This project outlines a plan and strategy for designing a new site. The new site will better communicate the department’s image and core attributes to the outside world and better meet the needs of users. This report covers the entire process, from research and project goals, through the development of a new design and how to measure success. Major recommendations include the use of a simple content management system (CMS), a new navigation structure and graphic design, and a few new content elements such as news, video, and podcasts.


This report will cover the overall strategy for the redesign of the Kent State University SLIS web site, including the site’s audiences, the vision for the site, and analysis of the content and maintenance. Finally, recommendations are made for the content, information architecture, and design of the new site. The ultimate goal of this project is to create a coherent analysis and plan for the SLIS department to execute. The result will be a site that better projects the image of the department, better serves the users, and, if possible, makes the staff’s job a little bit easier.

Site content has been updated, but the organization and design of the site have remained the same since 2000. The web has changed a great deal in the last five years, and the Kent SLIS site’s look and feel is not exactly cutting edge. The faculty and staff have voiced a desire to update the site, and there is anecdotal evidence that at least some students find the site lacking. Any new design must better address the needs of the site’s audiences and should better project the image of the department to the outside world. Also, the process used to update the current site is slow and unwieldy. The new site will solve three main problems: poor ease of use, an image that does not fit the department, and difficulty updating the site and communicating with users.

The process followed in creating this report has included requirements-gathering meetings with SLIS faculty and staff, content analysis of the current site, analysis of server logs, brainstorming sessions with Information Architecture Knowledge Management (IAKM) students, analysis of similar sites, academic usability research, the creation of personas, card sorting exercises, wireframing, prototyping, and other techniques. The report also recommends that additional steps, such as formal usability testing, be taken.
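Card sorting results are often analyzed by counting how frequently participants place two cards in the same pile; high-co-occurrence pairs suggest candidate navigation categories. The sketch below illustrates the idea with invented pile data (the topic names are hypothetical, not the actual SLIS sort results):

```python
from collections import Counter
from itertools import combinations

# Hypothetical open card sorts: each participant groups site topics into piles.
sorts = [
    [{"Courses", "Schedules"}, {"Faculty", "Staff"}, {"News", "Events"}],
    [{"Courses", "Schedules", "Faculty"}, {"Staff"}, {"News", "Events"}],
    [{"Courses", "Schedules"}, {"Faculty", "Staff", "News"}, {"Events"}],
]

def co_occurrence(sorts):
    """Count how often each pair of cards lands in the same pile."""
    counts = Counter()
    for piles in sorts:
        for pile in piles:
            for pair in combinations(sorted(pile), 2):
                counts[pair] += 1
    return counts

counts = co_occurrence(sorts)
# Pairs grouped together by the most participants suggest navigation categories.
for pair, n in counts.most_common(3):
    print(pair, n)
```

Here all three participants grouped Courses with Schedules, making that a strong candidate for a single navigation section.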


Notes: Bias in computer systems

Friedman, B., & Nissenbaum, H. (1996). Bias in computer systems. ACM Transactions on Information Systems, 14(3), 330-347.


In this article Friedman and Nissenbaum look at bias in software systems. Although the word bias can cover a number of related concepts, the definition used here is something that systematically and unfairly discriminates toward one party or against another. The authors see three main classes of bias in computer systems: preexisting bias, when an external bias is incorporated into a computer system, either through the individuals who have a hand in designing the system or via the society the software was created in; technical bias, where technical considerations bring about bias (from limitations, loss of context in algorithms, random number generation, or the formalization of human constructs); and emergent bias, where bias emerges after design, when real users interact with the system (for example, when new information becomes available but is not reflected in the design, or when systems are extended to new user groups). A number of illustrative examples are given, and the authors look at several specific software systems and point out existing or potential biases. One of the systems is the National Resident Match Program (NRMP), used to match medical school graduates to hospitals. In this system, if a student’s first choice of hospital and the hospital’s first choice of student do not match, the students’ second choices are run against the hospitals’ first choices. Overall, the result favors the hospitals. Two steps are proposed to rectify bias: diagnosis and active minimization.
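The hospital-favoring effect described above is essentially the proposer-favoring property of deferred-acceptance matching. As a minimal sketch (not the actual NRMP algorithm; the names and preference lists are invented), here is hospital-proposing deferred acceptance with one slot per hospital:

```python
# Hypothetical preferences: two students, two hospitals, one slot each.
students = {"ann": ["mercy", "county"], "bob": ["county", "mercy"]}
hospitals = {"mercy": ["bob", "ann"], "county": ["ann", "bob"]}

def hospital_proposing_match(students, hospitals):
    """Deferred acceptance with hospitals proposing.
    The proposing side receives its best stable outcome."""
    free = list(hospitals)                 # hospitals not yet matched
    next_pick = {h: 0 for h in hospitals}  # next student each hospital will try
    engaged = {}                           # student -> hospital
    while free:
        h = free.pop(0)
        s = hospitals[h][next_pick[h]]
        next_pick[h] += 1
        if s not in engaged:
            engaged[s] = h
        else:
            current = engaged[s]
            # The student keeps whichever hospital ranks higher on her own list.
            if students[s].index(h) < students[s].index(current):
                engaged[s] = h
                free.append(current)
            else:
                free.append(h)
    return engaged

print(hospital_proposing_match(students, hospitals))
# Both hospitals get their first-choice student; both students get
# their second-choice hospital.
```

Running the student-proposing version of the same algorithm would give both students their first choices instead. The direction of proposal, a purely technical design decision, systematically favors one side, which is exactly the kind of technical bias the authors describe.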

This is an extremely interesting subject, and I doubt most users and programmers are any more aware of it now than they were in 1996. One more recent article, which sought to turn literary criticism toward video games by pointing out cultural biases, also mentions the lack of study in this area. With so many people spending so much of their day interacting with software, why do these kinds of articles seem so few and far between? On the other hand, the particular examples chosen are illustrative but not very current. All three of the systems were large-scale, mainframe-type software that users interacted with in only a very limited way. Would the risk of bias be even greater for a system that is largely a user interface?

One clear implication appears in the diagnosis stage of removing bias: to find technical and emergent bias, designers are told to imagine the systems as they will actually be used and as additional user groups adopt them, respectively. So the charge is one-third ‘know thyself’ and two-thirds ‘know the users.’ The very notion of looking for bias is probably foreign to many user interface designers (in fact, few of the programmers I’ve met are even aware that accessibility guidelines exist for blind, deaf, and other users). The authors’ proposal that professional groups offer support to designers who detect bias and wish to fight it is a nice thought but seems doubtful. Few programming or UI organizations can exert any kind of pressure or drum up much bad publicity, and if they can, I haven’t heard of it (which I suppose means they can’t).

Notes: Web site usability, design, and performance metrics

Palmer, J.W. (2002). Web site usability, design, and performance metrics. Information Systems Research, 13(2), 151-167.

In this study Palmer looks at three different ways to measure web site design, usability, and performance. Rather than testing specific sites or trying out specific design elements, this paper looks at the validity of the measurements themselves. Any metrics must exhibit at least construct validity and reliability, meaning that the metrics must measure what they say they measure and must continue to do so in other studies. Constructs measured included download delay, navigability, site content, interactivity, and responsiveness (to user questions). The key measures of the user’s success with the web site included frequency of use, user satisfaction, and intent to return. Three different methods were used: a jury, third-party rankings (via Alexa), and a software agent (WebL). The paper examines the results of three studies, one in 1997, one in 1999, and one in 2000, all involving corporate web sites. The measures were found to be reliable, meaning jurors could answer a question the same way each time, and valid, in that different jurors and methods agreed on the answers to questions. In addition, the measures were found to be significant predictors of success.
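Palmer’s reliability checks come down to agreement between raters and across repeated ratings. As an illustration only (the ratings below are invented, and Palmer’s actual statistics are more involved), a simple Pearson correlation between two jurors’ scores gives a rough inter-rater consistency check:

```python
def pearson(x, y):
    """Pearson correlation between two equal-length rating lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Hypothetical 1-7 navigability ratings from two jurors across six sites.
juror_a = [6, 5, 2, 7, 3, 4]
juror_b = [5, 5, 3, 7, 2, 4]

# A value near 1.0 means the jurors rank the sites very similarly.
print(round(pearson(juror_a, juror_b), 2))
```

A high correlation here would support treating the jury’s ratings as a consistent measurement instrument, which is the kind of evidence the paper is after.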

This is an interesting article because, in my experience, usability studies are often all over the place, with everything from cognitive psychology and physical ergonomics to studies of server logs to formal usability testing to “top ten usability tips” lists. Some of this can be attributed to the fact that it is a young field, and some of it is due to the different motives fueling research (commercial versus academic). One thing in the article I worry about, however, is any measure of “interactivity” as a whole. Interactivity is not a simple concept to control, and adding more interactivity is not always a good idea. Imagine a user trying to find the menu on a restaurant’s web site: do they want to be personally guided through it by an interactive Flash cartoon of the chef, or do they just want to see the menu? Palmer links interactivity to the theory of media richness, which has a whole body of research behind it that I am no expert on. But I would word my jury questionnaires to reflect a rating of appropriate interactivity.

The most important impact of this study is that it helps put usability studies on a more academically sound footing. It is very important to have evidence that you are measuring what you think you are measuring. It would be interesting to see if other studies have adopted these particular metrics because of the strong statistical evidence in this study.

The most straightforward metric, download delay, is also one that has been discounted lately. The thought is that with so many users switching to broadband access, download speed is no longer the issue it used to be. That assumption is especially questionable for sites with information-seeking interfaces, which are often very dynamic and rely on database access. No amount of bandwidth will help if your site’s database server is overloaded.
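A toy model makes the point: total response time is roughly server processing time plus transfer time, and more bandwidth only shrinks the second term. The numbers below are invented for illustration:

```python
def response_time(payload_kb, bandwidth_kbps, server_seconds):
    """Toy model: total delay = server processing + transfer time."""
    return server_seconds + payload_kb / bandwidth_kbps

# A 200 KB dynamic page whose queries spend 2 s in an overloaded database:
slow_link = response_time(200, 100, 2.0)    # 100 KB/s dial-up-era link
broadband = response_time(200, 10000, 2.0)  # 10 MB/s broadband link

# Broadband all but eliminates transfer time, yet the 2 s of
# server-side work remains, so the page still feels slow.
print(slow_link, broadband)
```

In this model a hundredfold bandwidth increase cuts total delay from 4.0 s to about 2.02 s, and no further bandwidth can push it below the 2 s the server itself takes.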