Category Archives: Academic Papers
Tagging and Folksonomy article in the ASIST Bulletin

The issue has been out for a little while now, but I thought I would note that I have an article about the use of tagging systems in this month’s issue of the ASIST Bulletin. Take a look at Why Are They Tagging, and Why Do We Want Them To?

Almost everyone has a tagging system now; the web is facing serious weather, with tag clouds on every site. I think it’s interesting to explore the uses of folksonomies and why users bother tagging things in the first place. Here’s an excerpt:

When thinking about adding tagging to a site, the first question should be: What do we want to get out of this? Does the site need something to improve search results or a new navigational facet to better connect related pages? Is the goal to classify lots of multimedia objects with minimal cost or to get users to interact with the site a little more?

Tagging and Searching: Search Retrieval Effectiveness of Folksonomies on the World Wide Web

To complete my MS in Information Architecture and Knowledge Management at Kent State, I did some research on folksonomies and how they can support information retrieval. I compared social bookmarking systems with search engines and directories. I’m hoping to see the results published in an academic journal. In the meantime, you can see a pre-publication copy of my results:

Tagging and searching [pdf, 989K]

Formal usability testing with eye tracking – Mealographer

Usability Testing

Usability tests fall into two general categories, based on their aim: tests that look for usability problems in a specific site, and tests that try to prove or disprove a hypothesis. This test would fall into the former category. A search of the literature reveals that tests looking to uncover specific usability problems often use a very small number of participants, following Nielsen’s (2000) conclusion that five users are enough to find 85 percent of all usability problems. Nielsen derived this figure from earlier work (Nielsen and Landauer, 1993). Although there is much disagreement (Spool and Schroeder, 2001), this rule of thumb has the advantage of fitting the time and money budget of many projects.
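For readers curious where the “85 percent with five users” figure comes from, here is a minimal sketch of the Nielsen and Landauer problem-discovery curve. The per-user discovery rate of 0.31 is the commonly cited average, not a number from this study:

```python
# Sketch of the problem-discovery curve: the expected share of usability
# problems found with n test users is 1 - (1 - L)**n, where L is the average
# probability that a single user uncovers a given problem. L ~= 0.31 is the
# often-cited figure; with it, five users find roughly 85% of problems.

def problems_found(n_users: int, per_user_rate: float = 0.31) -> float:
    """Expected fraction of usability problems uncovered by n_users."""
    return 1 - (1 - per_user_rate) ** n_users

if __name__ == "__main__":
    for n in range(1, 9):
        print(f"{n} users: {problems_found(n):.0%} of problems")
    # 5 users works out to about 84-85%, hence the 'five users' rule of thumb.
```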

Use of Eye-Tracking Data

In terms of raw data, eye tracking produces an embarrassment of riches. A text export of one test of Mealographer yielded roughly 25 megabytes of data. There are a number of different ways eye tracking data can be interpreted, and the measures can be grouped into measures of search and measures of processing or concentration (Goldberg and Kotval, 1999):

Measures of search:

  • Scan path length and duration
  • Convex hull area, or the area of the smallest convex shape enclosing the scan path
  • Spatial density of the scan path
  • Transition matrix, or the number of movements between two areas of interest
  • Number of saccades, or sizable eye movements between fixations
  • Saccadic amplitude

Measures of processing:

  • Number of fixations
  • Fixation duration
  • Fixation/saccade ratio
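
As a rough illustration of how a few of these measures could be computed from raw fixation data, here is a small sketch. This is not code from the study; the data layout and the nominal saccade duration are assumptions:

```python
import math

# Hypothetical fixation records: (x, y) screen position in pixels, duration in ms.
fixations = [(120, 80, 220), (340, 95, 180), (360, 400, 450), (130, 410, 300)]

# Scan path length: summed distance of the saccades between consecutive fixations.
scan_path_length = sum(
    math.dist(fixations[i][:2], fixations[i + 1][:2])
    for i in range(len(fixations) - 1)
)

# Total fixation time, i.e. time spent processing rather than searching.
total_fixation_time = sum(f[2] for f in fixations)

# Fixation/saccade ratio: processing time versus search time. Saccade durations
# are not in this toy data, so assume a nominal 30 ms per saccade.
saccade_time = 30 * (len(fixations) - 1)
fixation_saccade_ratio = total_fixation_time / saccade_time

print(f"{len(fixations)} fixations, scan path length {scan_path_length:.0f} px")
print(f"fixation/saccade ratio ~ {fixation_saccade_ratio:.1f}")
```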

In general, longer, less direct scan paths indicate poor representation (such as bad label text) and confusing layout, while a higher number of fixations and longer fixation durations may indicate that users are having a hard time extracting the information they need (Renshaw, Finlay, Tyfa, and Ward, 2004). Usability studies using eye tracking may employ context-independent measures, such as fixations, fixation durations, total dwell times, and saccadic amplitudes, as well as screen position-dependent measures, such as dwell time within areas of interest (Goldberg, Stimson, Lewenstein, Scott, and Wichansky, 2002).
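
To make the position-dependent side concrete, dwell time within an area of interest is just the summed duration of the fixations that land inside it. A minimal sketch with made-up area-of-interest rectangles (not the layout used in the study):

```python
# Hypothetical areas of interest (AOIs) as named rectangles: (left, top, right, bottom).
aois = {
    "search box": (0, 0, 400, 60),
    "results list": (0, 60, 400, 600),
}

# Fixations as (x, y, duration_ms); made-up values for illustration.
fixations = [(150, 30, 200), (120, 300, 450), (380, 500, 250), (500, 300, 300)]

# Dwell time per AOI: sum the durations of fixations that fall inside each rectangle.
dwell = {name: 0 for name in aois}
for x, y, duration in fixations:
    for name, (left, top, right, bottom) in aois.items():
        if left <= x <= right and top <= y <= bottom:
            dwell[name] += duration

print(dwell)  # {'search box': 200, 'results list': 700}
```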

Because of the time frame of this investigation, the nature of the study tasks, and the researcher’s inexperience with eye tracking hardware and software, eye tracking data was compiled into “heat maps” based on the number and distribution of fixations. These heat maps are interpreted as a qualitative measure.
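
The heat maps themselves came from the eye tracking software, but the underlying idea is straightforward: accumulate fixations into a grid over the page so that heavily fixated regions stand out. The sketch below shows the basic accumulation step; the grid size, page dimensions, and lack of smoothing are assumptions, not the vendor’s algorithm:

```python
# Accumulate fixations into a coarse grid over a 1024x768 page; each fixation adds
# its duration to the cell it falls in. Real tools smooth this with a kernel and
# render it as a color overlay.
CELL = 64
cols, rows = 1024 // CELL, 768 // CELL
grid = [[0] * cols for _ in range(rows)]

fixations = [(150, 30, 200), (120, 300, 450), (380, 500, 250)]  # (x, y, duration_ms)

for x, y, duration in fixations:
    grid[min(y // CELL, rows - 1)][min(x // CELL, cols - 1)] += duration

# 'Hot' cells are those with the most accumulated fixation time.
hottest = max((grid[r][c], r, c) for r in range(rows) for c in range(cols))
print(f"hottest cell: row {hottest[1]}, col {hottest[2]}, {hottest[0]} ms")
```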
