Anti-Buzz: 2+2

by Andrew Emmott on June 12, 2012

in Anti-Buzz

The Buzz: If you can solve a problem once, you can solve it again.

The Anti-Buzz: 2+2 does not equal 2+3.

Continuing the conversation we started last week. Explaining some of the broad concepts in Computer Science is tricky; there is a lot of intuition involved. This creates the danger of either jumping too far ahead into abstract concepts, or going too slowly and coming across as condescending.

Last time I gave a basic description of algorithmic thinking: the need to precisely describe some process in order to get something done. This week I’ll make an important connection: the value of generalization.

As sort of a pre-Father’s Day gift, I’m going to share an anecdote from my childhood about a lesson learned from my father (i.e., the guy who runs this site). I was very eager to learn math as a kid, even before starting school. I think as a means to keep me occupied and well behaved, my father would give me some addition problems at about the difficulty of, say, 1234+567. I had learned to line them up properly and carry the one when necessary (I think this part always had me excited). So I’d eat up some problems and then, to make sure I wouldn’t ruin some friend’s wedding or whatever, he’d give me more.

Eventually I got it in my head that I would stump my father at this addition game. One day I wrote down two very large numbers and challenged him to add them. Naturally this was an easy task. He lined them up just like he had shown me and carried the one as many times as he needed to, and that was that. I was too young to remember how he explained it to me, but I remember the gist of the lesson: Once you knew how to add, it would work for all numbers, always. Big numbers just take longer to compute, but they aren’t any harder. As a process, the way I had learned to do addition was good enough to add anything, and that was when I understood the difference between knowing how to add, and just knowing what 2+2 is.
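To put that lesson in code: here is a minimal sketch of column-by-column addition with carrying, written by me as an illustration rather than taken from the column. The point is that the same short procedure handles 2+2 and enormous numbers alike; big inputs just take more steps.

```python
def add_digits(a, b):
    """Add two non-negative integers written as digit strings,
    one column at a time, carrying the one when necessary."""
    width = max(len(a), len(b))
    a, b = a.zfill(width), b.zfill(width)        # line the columns up, like on paper
    carry, digits = 0, []
    for x, y in zip(reversed(a), reversed(b)):   # rightmost column first
        total = int(x) + int(y) + carry
        digits.append(str(total % 10))           # the digit that stays in this column
        carry = total // 10                      # the one (or zero) we carry
    if carry:
        digits.append(str(carry))
    return "".join(reversed(digits))

# The same process works for small and large numbers alike:
print(add_digits("2", "2"))                      # 4
print(add_digits("1234", "567"))                 # 1801
print(add_digits("987654321", "123456789"))      # 1111111110
```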

Coming up with clear instructions is one thing, but coming up with instructions that work for all cases is another, and that’s the paradigm computers run on.

In the last column I suggested that if you wished a computer could do something, but you couldn’t come up with a set of instructions that would enable it to do so, then perhaps your desired feature was not realistic.

As a small example, consider text auto-completion on smartphones. Whenever auto-completion doesn’t work right, we all wish the computer could do better at “knowing what we meant”, but this is no easy problem to solve. However, if you are enterprising enough, I’m sure you could write very clear instructions on what the computer should do when you start typing certain letters. In other words, taking my challenge from last week, you could precisely describe the process by which the computer could auto-complete your words better than it does now.

The problem is that your solution wouldn’t generalize. You would have written instructions on how your phone can read your mind, but you have not solved the problem of “how can all phones everywhere correctly guess auto-completions?” You have told the computer the answer to 2+2, but you haven’t taught it how to add.
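The contrast is easy to see in code. Here is a hypothetical sketch (the function names and word lists are my own, not any real phone’s software): the first version hard-codes a few of my personal completions and answers nothing else; the second describes a general process that works for any word list and any user.

```python
# Hard-coded: the "2+2" version. It answers a few specific questions
# for one specific user, and nothing more.
MY_COMPLETIONS = {"teet": "teeth", "dent": "dentist"}

def complete_for_me(prefix):
    return MY_COMPLETIONS.get(prefix, prefix)

# General: the "knowing how to add" version. Given any word list with
# usage counts, suggest the matching word this user types most often.
def complete(prefix, word_counts):
    matches = [w for w in word_counts if w.startswith(prefix)]
    return max(matches, key=word_counts.get, default=prefix)

print(complete_for_me("dent"))                                       # dentist
print(complete("den", {"dentist": 12, "dental": 40, "density": 1}))  # dental
```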

Additionally, a set of instructions can often be decomposed into smaller sets of instructions. Going back to the peanut butter sandwich example from last week, the software engineer in me would begin with the highest level description:

1. Cut two slices of bread from the loaf.

2. Open the jar of peanut butter.

3. Spread some peanut butter on one face of one of the slices.

4. Put the slices together, with the peanut butter facing in.

The idea here is that I can take any one of these parts and give more description. Step 1, for example, is a 2+2 instance of a greater problem. Is it more useful to describe how to cut two slices from a particular loaf, or might I want to describe how to cut X slices from Y? If I can write general slice-cutting instructions with variable inputs (number of slices, what you are slicing, what knife you are using), then I have a process I can use elsewhere. By writing the peanut butter sandwich instructions, I have also partially written the instructions for serving cake.

Similarly, spreading something onto something else is another subprocess that can be used again; it would be easy to upgrade to a peanut butter and jelly sandwich if I already have the notion of “spreading” well explained.
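Here is a rough sketch of that decomposition (the function names are hypothetical and purely illustrative): each step becomes its own small procedure with variable inputs, so the same “cut” and “spread” routines can be reused for cake or for a peanut butter and jelly sandwich.

```python
def cut_slices(loaf, count, knife="bread knife"):
    """General slice-cutting instructions: works for bread, cake, anything sliceable."""
    return [f"slice of {loaf} (cut with {knife})" for _ in range(count)]

def spread(topping, slice_):
    """General spreading instructions, reusable for any topping."""
    return f"{slice_} with {topping} spread on one face"

def make_sandwich(toppings):
    top, bottom = cut_slices("bread", 2)             # step 1
    for topping in toppings:                         # steps 2 and 3, once per topping
        bottom = spread(topping, bottom)
    return f"{top} pressed onto {bottom}"            # step 4

print(make_sandwich(["peanut butter"]))              # the original sandwich
print(make_sandwich(["peanut butter", "jelly"]))     # the easy upgrade
print(cut_slices("cake", 8, knife="cake knife"))     # the subprocess reused elsewhere
```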

I need to stop here, but like last week I hope that I’ve provided more insight into the difficulty of designing software, and why computing technology is sometimes less convenient, intuitive or intelligent than we want it to be. Additionally, the notions of generality and re-usability are important beyond computers; as someone who sets policy, procures equipment, and hires and trains employees, you can embrace these values in business as well.
