Tag Archives: Programming


The most difficult problems you will ever face as a programmer

I was given a problem to solve at work earlier this week and I pretty much totally choked.  To be honest it wasn't that hard of a problem – I obviously can't share it with you here, but I will say that (among other things) I completely, totally blanked on how to find if two line segments on a plane intersect and didn't have a laptop handy to look it up.
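
(For what it's worth, here's the kind of orientation-test sketch I was groping for – the function names and sample points are my own, and the collinear edge cases are deliberately glossed over.)

```python
def cross(o, a, b):
    """Cross product of vectors OA and OB; the sign tells you which way you turn."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def segments_intersect(p1, p2, p3, p4):
    """True if segment p1-p2 crosses segment p3-p4 (general position).

    The idea: the segments cross iff p1 and p2 lie on opposite sides of
    the line through p3-p4, AND p3 and p4 lie on opposite sides of the
    line through p1-p2.
    """
    d1 = cross(p3, p4, p1)
    d2 = cross(p3, p4, p2)
    d3 = cross(p1, p2, p3)
    d4 = cross(p1, p2, p4)
    if ((d1 > 0) != (d2 > 0)) and ((d3 > 0) != (d4 > 0)):
        return True
    # Collinear and endpoint-touching cases would need on-segment checks;
    # omitted here for brevity.
    return False
```

No trigonometry, no slopes, no divide-by-zero for vertical lines – just two sign checks. That's exactly the kind of thing that's obvious with a reference handy and impossible to reconstruct on a whiteboard under pressure.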

This bothered me all week and got me thinking about my career as a programmer and the kinds of problems I’ve been asked to solve.  Everything we do as programmers, developers, or software engineers boils down to solving problems–so what have I been doing all these years?  Finally I realized that of all the difficult problems I’ve worked on in my professional career, most of them were difficult because of:

  • Imposed constraints;
  • Convoluted business rules and vague requirements;
  • Political or organizational issues; or
  • Human factors.

That last type of problem I actually really enjoy working on, but let’s put that aside for the moment.  Notice anything missing from that list?  Only rarely have I encountered problems that required really complex logic, difficult algorithms, or lateral thinking.

Why is this?  Have I shied away from those sorts of problems, or been unable to hack it?  I don’t think this is the case.  I did well enough on the SAT and GRE, and I can usually get myself back up to speed for solving logic puzzles in a week or two.  My guess is that my career is pretty typical, and that most of the problems that most companies face are due to constraints, vague business rules, organizational issues, and human factors.

This flies in the face of the kind of education most of us get as programmers.  At OWU the computer science department always erred on the side of math – we spent more time on concepts than practical applications.  I really, really value the kind of coursework I had in college but when it comes down to it, I learned just two things that I use on a regular basis:

  • Basic concepts and common programming paradigms; and
  • How to learn new languages, programming paradigms, etc.

I really enjoyed discrete math, but have rarely needed all the combinatorics.  Hacking Scheme in my AI class was very cool but that's the last time I've done any alpha-beta pruning.  I have successfully solved problems with some relatively mundane insights:

  • Don’t rely on memory, take notes and find references.
  • Look for low-hanging fruit.  Does the database even have indexes?  Do you really need to debug 2,000 lines of JavaScript that essentially reimplement the concept of linking?
  • If you ever have a technical quandary, you're probably not the only one in the world with the same question.  Chances are one of those other people has already asked the question somewhere on the web, and with a little luck someone else has already posted the answer.
  • Don't get involved in political struggles between teams and don't play the blame game.  Be unerringly pleasant in contentious situations, and if someone agrees to something in a meeting follow up with an email or some kind of documentation.
  • Prototype and iterate: people tend to use vague terminology and don't always want exactly what they think they want.

So, if you’re going to end up implementing shopping carts or interfaces between large internal systems most of your career, why bother with brain teasers and algorithm interview questions?  Does this mean all that fancy book learning should be thrown out the window?

No!  Of course not!  If you do, when a really juicy problem does come along you'll choke like I did.

I’ve come to the conclusion that I need to make a concerted effort to look for problems that are difficult not because I don’t have enough time to do them, or because the two teams involved hate each other, or because the business analyst said “X is always Y” when he meant X is usually Y.  My guess is I’ll be hit with some soon at work.

In the meantime, got any good logic puzzles?  Textbook problems?  Favorite websites?  Feel free to post them in the comments below to get me started.

Notes: Bias in computer systems

Friedman, B., & Nissenbaum, H.  (1996). Bias in computer systems.  ACM Transactions on Information Systems, 14(3), 330–347.


In this article Friedman and Nissenbaum look at bias in software systems. Although the word bias can cover a number of related concepts, the definition used here is something that systematically and unfairly discriminates toward one party or against another. The authors see three main classes of bias in computer systems: Preexisting bias, when an external bias is incorporated into a computer system, either through individuals who have a hand in designing the system or via the society the software was created in; Technical bias, where technical considerations bring about bias (from limitations, loss of context in algorithms, random number generation, or formalization of human constructs); and Emergent bias, where bias emerges after design when real users interact with the system (for example, when new information is available but not in the design, or when systems are extended to new user groups). A number of illustrative examples are given, and the authors look at a number of specific software systems and point out existing or potential biases. One of the systems is the National Resident Match Program (NRMP), used to match med school graduates to hospitals. In this system, if a student's first choice of hospital and hospital's first choice of student do not match, the students' second choices are run against the hospitals' first choices. Overall, the result favors the hospitals. Two steps are proposed to rectify bias – diagnosis and active minimization of bias.
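
(To make the hospitals-are-favored point concrete, here's a toy sketch of my own – a deferred-acceptance match in the Gale-Shapley style that the NRMP algorithm is related to, simplified to one slot per hospital and with made-up preference lists; the paper describes the actual round-by-round NRMP procedure, which differs in its details.)

```python
def stable_match(proposer_prefs, receiver_prefs):
    """Deferred acceptance: the resulting stable match is optimal for the
    side doing the proposing.  Preferences are dicts of ranked lists."""
    # Precompute each receiver's ranking of proposers for O(1) comparisons.
    rank = {r: {p: i for i, p in enumerate(prefs)}
            for r, prefs in receiver_prefs.items()}
    free = list(proposer_prefs)           # proposers with no match yet
    next_choice = {p: 0 for p in proposer_prefs}
    engaged = {}                          # receiver -> current proposer
    while free:
        p = free.pop()
        r = proposer_prefs[p][next_choice[p]]  # p's best untried option
        next_choice[p] += 1
        if r not in engaged:
            engaged[r] = p                # tentatively accept
        elif rank[r][p] < rank[r][engaged[r]]:
            free.append(engaged[r])       # r trades up; bump the old match
            engaged[r] = p
        else:
            free.append(p)                # rejected; p tries the next choice
    return {p: r for r, p in engaged.items()}

# Hypothetical preferences where the two sides want different pairings:
hospital_prefs = {"H1": ["S1", "S2"], "H2": ["S2", "S1"]}
student_prefs  = {"S1": ["H2", "H1"], "S2": ["H1", "H2"]}

by_hospitals = stable_match(hospital_prefs, student_prefs)
by_students  = stable_match(student_prefs, hospital_prefs)
```

Run it both ways: when hospitals propose, every hospital gets its first choice and every student gets their second; when students propose, it flips. Both outcomes are "stable," yet whichever side proposes gets the systematically better deal – which is exactly the kind of bias that's invisible unless you go looking for it.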

This is an extremely interesting subject, and I doubt most users and programmers are any more aware of it now than they were in 1996. One more recent article (http://web.mit.edu/21w.780/Materials/douglasall.html), which sought to turn literary criticism toward video games by pointing out cultural biases, also mentions the lack of study in this area. With so many people spending so much of their day interacting with software, why do these kinds of articles seem so few and far between? On the other hand, the particular examples chosen are illustrative but not very current. All three of the systems were large-scale, mainframe-type software that users interacted with in a very small sense. Would the risk of bias be even greater for a system which is largely a user interface?

One clear implication is shown in the diagnosis stage of removing bias—to find technical and emergent bias, designers are told to imagine the systems as they will actually be used and as additional user groups adopt them, respectively. So the charge is one-third ‘know thyself’ and two-thirds ‘know the users.’ The very notion of looking for bias is probably foreign to many user interface designers (in fact, few of the programmers I’ve met are even aware that accessibility guidelines exist for blind, deaf, and other users). The authors’ proposal that professional groups offer support to those designers who detect bias and wish to fight it is a nice thought but doubtful. Few programming or UI organizations can exert any kind of pressure or drum up much bad publicity, or if they can, I haven’t heard of it (which I suppose means they can’t).