Notes: Bias in computer systems

Friedman, B., & Nissenbaum, H. (1996). Bias in computer systems. ACM Transactions on Information Systems, 14(3), 330-347.


In this article Friedman and Nissenbaum examine bias in software systems. Although the word bias covers a number of related concepts, the definition used here is a system that systematically and unfairly discriminates in favor of one party or against another. The authors identify three main classes of bias in computer systems: preexisting bias, where an external bias is incorporated into a computer system, either through the individuals who have a hand in designing it or through the society the software was created in; technical bias, where technical considerations bring about bias (tool limitations, algorithms that lose context, imperfect random number generation, or the formalization of human constructs; the first sketch below shows one such mechanism); and emergent bias, which appears after design, once real users interact with the system (for example, when new information becomes available that the design did not anticipate, or when a system is extended to new user groups).

The authors give a number of illustrative examples and examine several specific software systems, pointing out existing or potential biases. One of these is the National Resident Match Program (NRMP), used to match medical school graduates to hospitals. In this system, if a student’s first choice of hospital and the hospital’s first choice of student do not match, the students’ second choices are run against the hospitals’ first choices, and so on. Overall, the result favors the hospitals; the second sketch below shows the same effect in a simplified matching. Two steps are proposed to rectify bias: diagnosis, followed by active minimization.
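On the technical-bias point about random number generation: the paper treats it briefly, so here is a minimal sketch of my own (not from the paper) of the classic modulo-bias pitfall, where reducing a uniform random byte modulo 6 quietly favors some outcomes over others.

    from collections import Counter

    # Map every possible byte value (0-255) through "% 6", as naive code
    # sometimes does to pick one of six options from a random byte.
    counts = Counter(b % 6 for b in range(256))
    print(counts)  # {0: 43, 1: 43, 2: 43, 3: 43, 4: 42, 5: 42}
    # 256 is not a multiple of 6, so outcomes 0-3 occur 43/256 of the
    # time and outcomes 4-5 only 42/256: a small, systematic tilt that
    # no user would ever notice but that favors the same parties on
    # every run. The standard fix is rejection sampling (discard bytes
    # >= 252 before reducing), which is what library routines such as
    # Python's random.randrange do internally.

The point is not this particular bug but how easily a formally “random” mechanism can discriminate systematically, which is exactly the authors’ notion of technical bias.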
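As for the NRMP itself, the notes above describe its rounds only informally. A standard way to model this kind of match is deferred acceptance (the Gale-Shapley algorithm) run with hospitals proposing; the sketch below is my own simplification, not the NRMP’s actual code. It assumes one seat per hospital and uses invented three-by-three preference lists, but it makes the paper’s point concrete: whichever side does the proposing comes out ahead.

    def deferred_acceptance(proposer_prefs, receiver_prefs):
        """Gale-Shapley deferred acceptance with one seat per receiver.

        Both arguments map a name to a preference list over the other
        side, best first.  Returns a dict {proposer: receiver}.
        """
        # How each receiver ranks each proposer (lower number = preferred).
        rank = {r: {p: i for i, p in enumerate(prefs)}
                for r, prefs in receiver_prefs.items()}
        next_pick = {p: 0 for p in proposer_prefs}  # next list index to try
        held = {}                                   # receiver -> proposer
        free = list(proposer_prefs)
        while free:
            p = free.pop()
            r = proposer_prefs[p][next_pick[p]]
            next_pick[p] += 1
            if r not in held:
                held[r] = p                         # tentatively accept
            elif rank[r][p] < rank[r][held[r]]:
                free.append(held[r])                # bump the held proposer
                held[r] = p
            else:
                free.append(p)                      # rejected; try next choice
        return {p: r for r, p in held.items()}

    # Invented preferences: three hospitals, three students, one slot each.
    hospitals = {"h1": ["s1", "s2", "s3"],
                 "h2": ["s2", "s3", "s1"],
                 "h3": ["s3", "s1", "s2"]}
    students  = {"s1": ["h2", "h3", "h1"],
                 "s2": ["h3", "h1", "h2"],
                 "s3": ["h1", "h2", "h3"]}

    # Hospitals proposing: h1-s1, h2-s2, h3-s3. Every hospital gets its
    # first choice; every student gets their last.
    print(deferred_acceptance(hospitals, students))

    # Students proposing: s1-h2, s2-h3, s3-h1. The favoritism flips.
    print(deferred_acceptance(students, hospitals))

With these particular preferences the effect is total: the proposing side gets its first choices in both directions and the other side its last. Real preference lists are messier, but the tilt runs the same way, which is the bias the authors diagnose in the hospital-favoring match.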

This is an extremely interesting subject, and I doubt most users and programmers are any more aware of it now than they were in 1996. One more recent article (http://web.mit.edu/21w.780/Materials/douglasall.html), which sought to turn literary criticism toward video games by pointing out their cultural biases, also mentions the lack of study in this area. With so many people spending so much of their day interacting with software, why do these kinds of articles seem so few and far between? On the other hand, the particular examples chosen are illustrative but not very current. All three of the systems were large-scale, mainframe-type software that users interacted with only in a limited way. Would the risk of bias be even greater for a system that is largely a user interface?

One clear implication is shown in the diagnosis stage of removing bias: to find technical and emergent bias, designers are told to imagine the systems as they will actually be used and as additional user groups adopt them, respectively. So the charge is one-third ‘know thyself’ and two-thirds ‘know the users.’ The very notion of looking for bias is probably foreign to many user interface designers (in fact, few of the programmers I’ve met are even aware that accessibility guidelines exist for blind, deaf, and other users). The authors’ proposal that professional groups support designers who detect bias and wish to fight it is a nice thought, but a doubtful one. Few programming or UI organizations can exert any kind of pressure or drum up much bad publicity, and if any can, I haven’t heard of it (which I suppose means they can’t).