Category Archives: Writing


Notes: Looking for information

Case, D.O. (2002). Looking for information: A survey of research on information seeking, needs, and behavior.  New York: Academic Press.  Chapter 9: Methods: Examples by type.

In this chapter Case reviews the different methodologies employed by research on information seeking, use, and sense-making. Although he notes a few overview studies that cast a wide net and report overall proportions, the chapter is not a survey of all the literature; instead it gathers representative examples of different types of research. The types of research included case studies, formal and field experiments, mail and Internet surveys, face-to-face and phone interviews, focus groups, ethnographic and phenomenological studies, diaries, historical studies, and content analysis. There were also multiple-method studies and meta-analyses. Case writes about some of the limitations of the different methodologies; for example, case studies have limited variables, focusing on one item or event to the exclusion of others, and they are limited in terms of time as well. The author concludes that most studies assume people make rational choices and that specific variables matter more than context. Qualitative measures are becoming more popular but cannot be generalized.

The author did a particularly good job of finding studies to examine. The best example of this is the experiments: very few laboratory experiments have been conducted specifically on information use, but there have been many on consumer behavior, so Case draws on consumer behavior studies that involved gathering information for decision making. Another choice I found particularly interesting was the historical research by Colin Richmond on the dissemination of information in England during the Hundred Years’ War. Usually when I think of historical research in social science, I think of things like comparing content analyses of newspapers from the 1950s and today. It was interesting to see things from a historian’s point of view, and it was also a good reminder that people did not just start needing information with the invention of the Internet. A good, though dense, book on this topic is A Social History of Knowledge by Peter Burke.

The most immediate application of this chapter is in suggesting methodologies to use in different situations. When I’m doing research, I tend to have a bias toward sources that conducted experiments or survey research. Reading through these cases reminded me of the usefulness of approaches like case studies and content analysis. Another interesting application of the chapter is in suggesting topics for further study. Although the author doesn’t build to any general conclusion on the research topics at hand (there is no overall theme to the research), looking at the conclusions of the different types of studies suggests some interesting questions. For example, since the study by Covell, Uman and Manning suggested that doctors report using books or journals first but in reality turn to colleagues first, how should we reexamine studies that relied on self-reporting, such as the case studies or the surveys? Perhaps some of the tactics used in the consumer research experiments would be a valuable addition.

Notes: Helping people find what they don’t know

Belkin, N. J. (2000). Helping people find what they don’t know. Communications of the ACM, 43(8), 58–61.

In this article, Belkin argues that people generally start searching for information when they don’t know much about a subject. It is therefore problematic that many search systems require knowledge of the domain in order to get good results, for example when users know neither the specific keywords nor the controlled vocabulary of the system. His group feels that the best way around this is for the system to make suggestions along the way. Two techniques can be used: system-controlled, where the user’s query is enhanced automatically by the system using measures like word frequency, and user-controlled, where the user is given the results of the query along with suggestions for making it more effective. The author’s team found that suggestions were most effective when users could control which suggestions were used, knew how the suggestions were generated, and were comfortable with the results.
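To make the system-controlled technique concrete, here is a minimal sketch of frequency-based suggestion generation, not Belkin’s actual system: run the user’s query against a toy corpus, then offer the most frequent terms from the top results as candidate additions. The corpus, function names, and scoring are all hypothetical illustrations.

```python
from collections import Counter

# Toy corpus: each document is a bag of terms (hypothetical data).
DOCS = [
    ["information", "seeking", "behavior", "survey"],
    ["information", "retrieval", "query", "expansion"],
    ["query", "expansion", "relevance", "feedback"],
]

def search(query_terms):
    """Return documents sharing at least one term with the query, best first."""
    scored = [(len(set(query_terms) & set(d)), d) for d in DOCS]
    return [d for score, d in sorted(scored, reverse=True) if score > 0]

def suggest_terms(query_terms, results, k=2):
    """System-controlled step: count terms in the retrieved documents and
    offer the most frequent non-query terms as suggestions the user can
    accept or reject (the user-controlled half of the loop)."""
    counts = Counter(t for doc in results for t in doc if t not in query_terms)
    return [term for term, _ in counts.most_common(k)]

results = search(["query"])
suggestions = suggest_terms(["query"], results)
print(suggestions)  # frequent co-occurring terms, e.g. ['expansion', 'relevance']
```

Because the suggestions come from a transparent rule (term frequency in the results), a user could be told exactly why each one appeared, which matches the finding that users trust suggestions whose origin they understand.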

The author’s findings seem both intuitive and promising. It makes sense that in an interactive, structured searching system, giving users suggestions and allowing them to take them or leave them would work well, and the suggestions should be neither bizarre nor mysterious. But with the rise of the World Wide Web, I think it’s pretty clear that users with less domain knowledge prefer less-structured searching environments. In my experience, users who are new to a system will type unstructured keyword queries into anything that even looks like a search box, even if it is clearly labeled as a field for author name, product code, or start date. Power users, on the other hand, often have more knowledge about the data than the system’s programmers, so for these sorts of suggestions to be useful, the algorithm would need to do more than just call up synonyms. The article makes it clear that these findings are early, so I would be interested to see what the group has come up with since 2000.

These ideas could be applied to both structured and unstructured searching environments, though my guess is that they would be easier to implement in more structured environments, because the structure of the system can be used to generate the suggestions. There certainly have been a number of projects that have tried to provide something like this for general web searching. Rudimentary systems like Google Suggest and more advanced ones like Teoma show off the potential. Notice, however, that neither of these has exactly taken the search engine industry by storm; people are apparently happy to muddle along with plain keyword searching and advanced ranking algorithms. I do wonder whether the finding that users liked to have some idea of how suggestions were generated would apply here as well: would users be happier with Google if they were told why PageRank picked a certain site as the number one result? Since the algorithms used by Google, Yahoo, MSN, and others are trade secrets, I doubt we’ll see anything like that in the near future. On the other hand, Amazon.com’s recommendation engine does tell the user why a certain book was suggested and allows the user to remove certain suggestions. Although it is not really a search tool, it follows the precepts discussed here and seems to be successful.

The information economics of price aggregation web sites

Introduction

Just as the Internet has had an impact on the market for information goods and services, it has also had an impact on the information necessary for markets to function. Perfectly competitive markets, upon which models of economics are based, require four key characteristics:

  • Many sellers.
  • Nearly identical products.
  • Easy market entry (and exit).
  • Perfect information for both buyers and sellers.

The last point is possibly the most difficult to satisfy. Good information is hard to come by, let alone perfect information, for both buyers and sellers. Buyers are perhaps at a disadvantage, but the rise of the online marketplace, and specifically of price-aggregation web sites, has created an interesting change.
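The core service such a site provides can be sketched in a few lines: given the same (nearly identical) product listed by many sellers, show the buyer every price at once, cheapest first, along with the spread the buyer would otherwise have had to discover one store at a time. The seller names and prices below are invented for illustration.

```python
# Hypothetical listings for one nearly identical product across sellers.
LISTINGS = [
    {"seller": "ShopA", "price": 24.99},
    {"seller": "ShopB", "price": 19.95},
    {"seller": "ShopC", "price": 22.50},
]

def aggregate(listings):
    """What a price-aggregation site computes for the buyer: all sellers'
    prices side by side, cheapest first, plus the price spread."""
    ranked = sorted(listings, key=lambda item: item["price"])
    spread = ranked[-1]["price"] - ranked[0]["price"]
    return ranked, spread

ranked, spread = aggregate(LISTINGS)
print(ranked[0]["seller"], round(spread, 2))  # ShopB 5.04
```

By collapsing the buyer’s search cost to a single comparison, the aggregator moves the market a step closer to the perfect-information condition, at least on the buyer’s side.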
