Ann and I ended up in the news again today, this time in a New York Post article about Twitter. I used Twitter to send out updates on what was going on during labor. I’m probably not the first to do it, but it’s an interesting use case for an article like this, aimed at introducing some new tech that’s been popping up more and more in popular culture.
But when I noticed the accompanying photo illustration, I had to post this screenshot. Notice anything a little off?
Click here to see the full-sized screenshot. The caption text seems to imply that it’s a photo of Ann. For those of you who don’t know us personally, Ann’s not actually a white person like miss stock photo here.
I’m not going to turn this into a full-time political blog, but I just spent the evening researching local issues and candidates and a thought occurred to me – does anyone test the usability and the user experience of the democratic process?
There are a number of ways to approach this question. The usability of voting systems is a big part of it, and in the case of electronic voting machines this would look much like traditional usability testing. I’m going to put that question aside for now, since I haven’t studied it very closely, and talk instead about the information-seeking portion of the electoral user experience.
Also, I apologize in advance for making this post very U.S.-centric. Please comment below on how these issues apply in your country.
Political information seeking
We are completely inundated with information and misinformation about the major candidates for national office, from a wide variety of communication media. Everything from dinner-table conversations and door-to-door canvassing to cable news, candidate web sites, and political blogs can influence how we vote.
The Cleveland Plain Dealer ran a story today about the origin of guns used in crimes in the city. This is an issue that people are concerned about, and it deserves coverage. Rather than present information about gun laws in various states and the number of crime guns recovered as a boring list, the PD provided a helpful infographic.
Maps and bar charts can be really useful tools to help people make sense of information. But look closely and you’ll see a problem – the bar chart showing the relative number of crime guns recovered is wrong:
At first glance it looks like about as many guns are recovered from Cleveland as from Cincinnati and Columbus. But the Cleveland number is really only about 65% of the Columbus number.
This is probably just an error, akin to misspelling someone’s name in an article. But it’s a good example of a bad graphic, sometimes called chartjunk. Ignoring the error, a bar chart like this might conceal more than it conveys. The top three cities are much larger than the rest, so wouldn’t we expect them to have more guns seized? Maybe a measure per 1000 persons would be better. We also need to think about what this chart implies to readers – is a higher number worse, because it correlates to more crimes, or better, because it means police departments are doing a good job of taking guns off the streets?
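The per-capita adjustment suggested above is simple arithmetic. Here is a minimal sketch; the counts and populations below are made up purely for illustration (they are not the Plain Dealer’s figures), and only the normalization step matters:

```python
# Hypothetical figures for illustration only -- not from the Plain Dealer story.
# Each entry: (city, crime guns recovered, population)
cities = [
    ("Cleveland", 1300, 444000),
    ("Columbus", 2000, 748000),
    ("Cincinnati", 1100, 332000),
]

# Normalize raw counts to guns recovered per 1,000 residents, so larger
# cities aren't ranked higher simply for having more people.
for city, guns, population in cities:
    per_1000 = guns / population * 1000
    print(f"{city}: {guns} recovered, {per_1000:.2f} per 1,000 residents")
```

With numbers like these, the per-capita ordering can differ from the raw-count ordering, which is exactly why the choice of measure changes what the chart appears to say.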
If you’re interested in reading more about how to design good graphics and communicate large amounts of data effectively, take a look at the books of Edward Tufte.