Tag Archives: ethics

Corporate Fan Pages: When you come to a conversation, have something to say

[Image: Nestle logo altered in protest of environmental damage]

Salon had an interesting post about some trouble Nestle ran into on their Facebook fan page. You can read more there, but here’s the gist: environmental groups are accusing Nestle of driving rainforest destruction through their purchase of palm oil. They buy palm oil from Indonesia, where enough forest is being cleared to threaten orangutans with extinction. Nestle has a fan page on Facebook, and orangutan lovers started posting complaints on it.

Shortly thereafter, the moderator posted:

To repeat: we welcome your comments, but please don’t post using an altered version of any of our logos as your profile pic — they will be deleted.

If you know anything about the internet, then you know that this message was the worst possible thing Nestle could have posted. It’s the Streisand Effect – if you try to hide something on the internet, it suddenly becomes a lot more interesting, and you only draw more attention to it. This is so basic to the sociology of the web that if I were hiring someone to do social media work or PR, that would be the first question in the interview.

The Salon article catalogues some interesting exchanges between the Nestle admin and Facebook users, culminating in this announcement:

Nestle: This (deleting logos) was one in a series of mistakes for which I would like to apologise. And for being rude. We’ve stopped deleting posts, and I have stopped being rude.

A trip to the fan page now shows nothing but altered logos and calls for a boycott. The Salon piece concludes that the real shame of this whole exchange is that the admin acted like a human being, actually talked to people, and is probably in big trouble for it – and even if not, Nestle will be less likely to try anything like this in the future, retreating to boring press releases and spokespeople.

I think the real lesson to be learned here is that when you show up to a conversation, you actually need to have something to say.

Nestle is trying to take advantage of the fact that there are a lot of people out there who really like their milk chocolate, or really enjoy KitKat bars. They’re using social networking sites to encourage people to talk about chocolate and KitKat bars, remember how much they like them, and hopefully buy more. This all makes sense and is a lot more engaging and cost-effective than TV ads and the like. But once you start people talking, you cannot control what they are going to say. That’s not how conversations work, even conversations attenuated into new formats like Facebook wall posts.

So now people are accusing you of hating cute orangutans. What do you do? You need to be able to say something:

  • We didn’t know; here’s what we’re doing to fix it.
  • This isn’t true; here’s why.
  • There are no other suppliers, but here’s what we’re working on to substitute or work around the problem.

Hell, if you think you can get away with it without losing more customers, even saying “Who cares about monkeys, we gots to have our delicious sugary snacks!” is better than saying nothing or trying to edit the conversation in progress. Having some kind of ethics really matters here.

But if you can’t say any of these things… well, just shut everything down. Stop trying to build equity in your brand and concentrate on making the cheapest candy you can, because your company obviously doesn’t understand the point of building a brand or cultivating passionate customers.

The Ethics of Web Apps, or, Ever try to get a list of your contacts from Facebook?

Even before I worked at Google, I was pretty impressed by the “don’t be evil” motto. Not that I think any company is perfect or that anyone can hire only saintly employees – but it’s impressive when anyone recognizes the ethical implications of what we do as programmers and web developers.

Now that I work there, I can tell you that everyone really seems to take it to heart (disclaimer:  this is my personal blog and I am not representing my employer in any way).  At this point, you may be asking, “programs are just lists of instructions, web sites are just products, what’s the ethical dilemma?”

I’ll give you an example.

I’m a big fan of Facebook – I think they’ve really done a great job building a social networking system, and it’s been very useful for keeping up with friends all over the world. But I also have an account at LinkedIn, and Flickr, and Yelp, and an address book in Thunderbird, and another on my iPhone, and… you get the picture. So I’m trying to collect all my contacts in one system (Gmail) so I can just import/export to keep all these different social networking systems up to date.
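To make the workflow concrete, here’s a minimal sketch of the glue code this implies: converting a contacts CSV export into vCard 3.0, which most address books can import. The column names (“Name”, “E-mail Address”) are assumptions – check them against whatever headers your service actually emits.

```python
import csv

def csv_to_vcards(csv_path, vcf_path):
    """Convert a contacts CSV export into a vCard 3.0 file."""
    with open(csv_path, newline="", encoding="utf-8") as src, \
         open(vcf_path, "w", encoding="utf-8") as dst:
        for row in csv.DictReader(src):
            # Column names are assumptions -- adjust to your export's headers.
            name = (row.get("Name") or "").strip()
            email = (row.get("E-mail Address") or "").strip()
            if not (name or email):
                continue  # nothing worth carrying over
            dst.write("BEGIN:VCARD\nVERSION:3.0\n")
            dst.write(f"N:{name};;;;\n")   # crude: whole name in the family-name slot
            dst.write(f"FN:{name or email}\n")
            if email:
                dst.write(f"EMAIL;TYPE=INTERNET:{email}\n")
            dst.write("END:VCARD\n")

csv_to_vcards("contacts.csv", "contacts.vcf")
```

None of this is hard to write; the point is that round-tripping your contacts only works when every system in the loop actually lets the data out.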

But Facebook doesn’t have a function to export a list of contacts and email addresses. What’s more, they’ve apparently actively blocked attempts by developers to build tools that do it, even disabling people’s accounts.

They are, of course, not legally obligated to let you export your contacts.  And if I were building a social networking site, it probably wouldn’t be the first feature I would implement.  But ethically, I think, they should do so.  Why?  We can refer to Kant’s categorical imperative or Jesus’ golden rule:  They should build open systems because they would like other systems to be open.

They certainly take advantage of the openness of other systems, allowing you to import contacts from Gmail.  Google’s social networking site, Orkut, will happily export your contacts, and I don’t think that’s an accident.  The engineers and product managers at Google make conscious choices to do the right thing.

But wait…  am I really asking them to make it easy for their users to take their data and go over to a competitor?  Isn’t that a bad business practice?

It’s possible, but beside the point.  I’m sure you and I could think of plenty of things that are profitable but morally repugnant.  What’s more, I don’t think it is a bad business practice at all.  I think that the walled garden approach is a sign of desperation rather than innovation.  Orkut is not the only one that lets you take your data with you – LinkedIn allows exports, for example.

Paul Graham wrote a really interesting post about this recently:

When you’re small, you can’t bully customers, so you have to charm them. Whereas when you’re big you can maltreat them at will, and you tend to, because it’s easier than satisfying them. You grow big by being nice, but you can stay big by being mean.

If you’d like to read more about this subject and see what some developers are doing to make your data more portable, check out DataPortability.org.

Notes: Bias in computer systems

Friedman, B., & Nissenbaum, H. (1996). Bias in computer systems. ACM Transactions on Information Systems, 14(3), 330–347.

In this article Friedman and Nissenbaum look at bias in software systems. Although the word “bias” can cover a number of related concepts, the definition used here is anything that systematically and unfairly discriminates in favor of one party or against another. The authors identify three main classes of bias in computer systems:

  • Preexisting bias, where an external bias is incorporated into the system, either through the individuals who have a hand in designing it or through the society the software was created in.
  • Technical bias, where technical considerations bring about bias – hardware limitations, loss of context in algorithms, random number generation, or the formalization of human constructs.
  • Emergent bias, where bias appears only after design, once real users interact with the system – for example, when new information becomes available but isn’t reflected in the design, or when the system is extended to new user groups.

A number of illustrative examples are given, and the authors examine several specific software systems, pointing out existing or potential biases. One of these is the National Resident Match Program (NRMP), used to match medical school graduates to hospitals. In this system, if a student’s first choice of hospital and a hospital’s first choice of student do not match, the students’ second choices are run against the hospitals’ first choices; overall, the result favors the hospitals. The authors propose two steps to rectify bias: diagnosis and active minimization.
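The bias in the match is easier to see with a toy version of the underlying algorithm. The NRMP of that era was essentially a hospital-proposing deferred-acceptance (Gale-Shapley) match, and deferred acceptance systematically favors whichever side does the proposing. The sketch below uses made-up preference lists; run it both ways and notice that each side gets its first choices only when it proposes.

```python
def deferred_acceptance(proposer_prefs, reviewer_prefs):
    """Stable matching (Gale-Shapley); the outcome favors the proposing side."""
    # rank[r][p]: how reviewer r ranks proposer p (lower is better)
    rank = {r: {p: i for i, p in enumerate(prefs)}
            for r, prefs in reviewer_prefs.items()}
    free = list(proposer_prefs)                # proposers without a match yet
    next_choice = {p: 0 for p in proposer_prefs}
    match = {}                                 # reviewer -> proposer

    while free:
        p = free.pop()
        r = proposer_prefs[p][next_choice[p]]  # p's best reviewer not yet tried
        next_choice[p] += 1
        if r not in match:
            match[r] = p                       # r tentatively accepts p
        elif rank[r][p] < rank[r][match[r]]:
            free.append(match[r])              # r trades up; old proposer is free again
            match[r] = p
        else:
            free.append(p)                     # r rejects p; p tries its next choice
    return {p: r for r, p in match.items()}

# Hypothetical preferences: each student and hospital ranks the other side.
students = {"ann": ["mercy", "city"], "bob": ["city", "mercy"]}
hospitals = {"mercy": ["bob", "ann"], "city": ["ann", "bob"]}

print(deferred_acceptance(hospitals, students))  # hospitals propose: each gets its first pick
print(deferred_acceptance(students, hospitals))  # students propose: each gets their first pick
```

In the hospital-proposing run every hospital gets its first choice and every student gets their second; swap who proposes and the favoritism flips. That structural tilt, invisible to anyone who only reads the matching rules, is exactly the kind of systematic unfairness the authors are describing.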

This is an extremely interesting subject, and I doubt most users and programmers are any more aware of it now than they were in 1996. One more recent article (http://web.mit.edu/21w.780/Materials/douglasall.html), which sought to turn literary criticism toward video games by pointing out cultural biases, also mentions the lack of study in this area. With so many people spending so much of their day interacting with software, why do these kinds of articles seem so few and far between? On the other hand, the particular examples chosen are illustrative but not very current. All three of the systems were large-scale, mainframe-type software that users interacted with only in a very limited way. Would the risk of bias be even greater for a system that is largely user interface?

One clear implication appears in the diagnosis stage of removing bias: to find technical and emergent bias, designers are told to imagine the systems as they will actually be used and as additional user groups adopt them, respectively. So the charge is one-third ‘know thyself’ and two-thirds ‘know the users.’ The very notion of looking for bias is probably foreign to many user interface designers (in fact, few of the programmers I’ve met are even aware that accessibility guidelines exist for blind, deaf, and other users). The authors’ proposal that professional groups support designers who detect bias and wish to fight it is a nice thought, but doubtful in practice. Few programming or UI organizations can exert any kind of pressure or drum up much bad publicity – and if they can, I haven’t heard of it (which I suppose means they can’t).