Anti-Buzz: Communications Breakdown

by Andrew Emmott on October 18, 2014

in Anti-Buzz, General

Andrew has been writing Anti-Buzz for four years, resulting in almost 200 articles. For the next several weeks we will revisit some of these in case you missed them.

Why do computers lie to us? Why don’t they always listen to us? Well, they don’t lie to us exactly, and they can’t really ignore us, but given that we are prone to take everything they do so personally, it feels like lying. It’s hard not to feel slighted when communication breaks down between you and your electronic vessel. Is it reasonable to feel so personally invested in our computing? I say yes.

I used to classify my time playing video games as time playing solitaire. I was, after all, alone. But in a sense you could argue that, at the very least, I was competing against the design talents of the people who made the game. The best gaming experiences are the ones where you can feel some sort of implicit dialogue between yourself and the game designer. The same is true of reading books; you can call it solitary, but you are in some ways conversing with the author, provided they have suitably engaged your mind.

The same is true of computing. I spend a lot of time trying to demystify the apparent "intelligence" of computers and praise the real intelligence of humans, but I am admittedly swamped in these ideas thanks to my coursework and research. The truth is, most of the time, for most people, computing is more like reading a book or going to see a play; there is an implicit communication between the user and the creator. The inability of the computer to "know what you want" is, yes, a function of its non-existent intelligence, but it is also sometimes a failure of the engineer behind the software. Of course, trying to write software that works for everybody is like trying to write a novel that pleases everyone; the best you can achieve is popularity.

Given that personal communication is so integral to everything we do (increasingly so now that it has become easier and easier to manage), I think we can learn a few things from the communication breakdowns we face with computers every day.


Ambiguity and Trust

I think the only appropriate response to the preceding dialog box is "Help." This is a cherry-picked example, the result of my plumbing Google image search for something suitably obnoxious, but dialog boxes are often ground zero for communication failures in computing. Worse, user studies show that most people nowadays just click through these messages; and fair enough, I say. This simple and effective way to prompt the user has been killed by overuse. It used to be that my most common computing advice to people was, in fact, to click through all these things. Neophyte computer users used to be so intimidated by the plethora of obtuse prompts, offering little in the way of choice or information, that the only way to get people over the hump of technophobia was to encourage them to ignore all prompts.

Things aren’t as bad now, but we still run into the occasional choice between “Yes” and “Okay”; or between “Yes,” “No,” and “Cancel”; or even a straightforward “Yes” and “No” when no question has been asked.

What we can learn:

  • Don’t provide irrelevant information or prompt too often – you only train people to trivialize what you have to say and to lose trust in your ability to communicate.
  • Offer clear options, and don’t offer too many – a glut of choices either obscures what is important or makes people think you don’t care what happens.
  • If you have a question to ask, remember to actually ask the question. (People make this mistake more than they would care to admit – and then get frustrated when their concerns go unacknowledged.)
  • Don’t demand an immediate answer if it is too disruptive. (Computers are still bad at this.)
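The guidelines above can be made concrete with a small sketch. This is purely illustrative – the function name and validation rules are my own invention, not from any real UI toolkit – but it shows what "ask the question, offer clear options" looks like when enforced in code:

```python
# Hypothetical helper enforcing the prompt guidelines above.
# Invented for illustration; not part of any real UI library.

def make_prompt(question, options):
    """Build a prompt string that asks an explicit question
    with a small, clear set of choices."""
    # Guideline: remember to actually ask the question.
    if not question.rstrip().endswith("?"):
        raise ValueError("a prompt should ask an explicit question")
    # Guideline: offer clear options, and don't offer too many.
    if not 2 <= len(options) <= 3:
        raise ValueError("offer two or three clear choices")
    return f"{question} [{' / '.join(options)}]"

# A clear prompt: an explicit question with three distinct outcomes.
print(make_prompt("Save changes before closing?", ["Save", "Discard", "Cancel"]))
```

A dialog that fails either check – a bare "Yes / No" with no question, or a wall of buttons – is exactly the kind that trains users to click through.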

Time Estimation

You would think computers would be better at this: estimating time. We’re certainly bad at it (or at least some of us are), trying to cram too much into one day, or not enough into another, showing up too late or too early. The world, despite its efficiencies, is full of these tiny mistakes. Your download will take 3 hours to complete, then 5 minutes, then 32 minutes, all in the course of one real minute. It seems a mechanical process: measure something, add the somethings together, combine them into an agenda – so why are computers as iffy about this as we are? The exact details aren’t important, but suffice it to say that if computers could ever know exactly how long something would take, they would get a lot more done than they already do. The same is true of ourselves: if we always made these guesses correctly, we would spend our time more wisely and get more done. However, time estimation is also about communication. The world is full of collaborators, and they all have to know how long the others are going to take. Shaky time estimates from computers might be the stuff of jokes for us, but they are still a crafted part of the user experience.
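Why does the estimate swing from 3 hours to 5 minutes? A naive ETA divides the bytes remaining by the most recent transfer rate, and transfer rates are noisy. A common remedy – sketched below with invented function names and made-up numbers, purely for illustration – is to smooth the rate before dividing:

```python
# Illustrative sketch of why download ETAs jump around, and one fix.
# Function names and sample numbers are invented for this example.

def naive_eta(bytes_remaining, last_rate):
    """ETA from the single most recent rate sample -- swings wildly."""
    return bytes_remaining / last_rate

def smoothed_eta(bytes_remaining, rates, alpha=0.1):
    """ETA from an exponential moving average of rate samples."""
    avg = rates[0]
    for r in rates[1:]:
        avg = alpha * r + (1 - alpha) * avg  # blend new sample into average
    return bytes_remaining / avg

# Noisy rate samples (bytes/sec) for a 500 MB download: the naive
# estimate lurches between extremes; the smoothed one stays stable.
samples = [100_000, 5_000, 900_000, 50_000]
remaining = 500_000_000
print(naive_eta(remaining, samples[1]))    # huge ETA from one slow sample
print(naive_eta(remaining, samples[2]))    # tiny ETA from one fast sample
print(smoothed_eta(remaining, samples))    # somewhere sensible in between
```

Real download managers use variations on this idea; the trade-off is that a heavily smoothed estimate is calmer but slower to reflect a genuine change in speed.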

What we can learn:

  • Stay in communication when time estimates change. (But don’t overdo it.)
  • Err on the side of overestimation. Apart from the old trick of managing expectations, it is better to accidentally have too much time than accidentally too little. Overestimation is the better mistake to make.


Consistency

It is amazing how consistent computers are, given that software developers can’t even agree on the best route out of a burning building. Imagine if you opened an application and the scroll bar was on the left, the window-close button was moved to a bottom corner, and the “Edit” menu came before “File”. You aren’t stupid and you aren’t unadaptable, but that application would always feel difficult to use, despite no particularly bad design decisions.

What we can learn:

  • Communication is improved by consistency. Shared expectations speed up communication and understanding while requiring less information.
  • Deviating from standard expectations has a cost. (But it is sometimes worth it.)


