Next week I’m planning a year-end review where I evaluate my own predictions for the year and come to the disappointing realization that I didn’t make any. (Note for next year: embarrass yourself with inaccurate predictions.) As it stands, I have one more topic for you to round out the year, but it harks back to arguments I made almost a year ago, [when I suggested working definitions of innovation and intuition]. This week’s topic is rules.
Imagine your car is broken, but you want to fix it yourself. You don’t know how, so you ask a mechanic to show you. Let’s say this experiment has two outcomes. One mechanic discovers the problem, and then gives you detailed instructions on how to fix it: Take this wrench, put it here, turn it this way, grab that thing, move it there, and so on. The other mechanic takes a bit longer and has the irritating 3rd-grade-teacher habit of making you work for knowledge. This is your engine. This is a piston. This is a wrench. Can you guess what happens when you put it here and turn it this way? The solution isn’t direct, but you learn a lot about your car along the way. If another problem cropped up, you might be able to fix that one on your own.
Both of the above are sets of rules, but suffice it to say that one of them is better than the other.
Those many months ago, when I spoke about intuition, I praised the human brain for being able to quickly make intelligent decisions – a mysterious “intuition” quality that we have failed to harness in artificial intelligence. Yes, we’re imperfect and we make bad decisions, but in general we are able to do the right thing, or at least the “good enough” thing. I said before that we can do this because we are able to follow “rules of thumb” that “work most of the time,” but this leaves out an important detail: that we are still smart enough to solve problems we haven’t seen before. It is the rules of thumb that enable this as well – the rules we live by are lightweight enough that we can adapt to new situations (or at least they should be – you will find that stubborn people live by stubborn rules).
So, when I want to discuss rules, I want to think of them in terms of this trade-off: “How often do they do the right thing?” versus “How easy are they to adapt away from?” Like last week, this is yet another broad lesson from AI research that seems applicable to business policy and life in general. In some ways, what I’m getting at here is very much the next step in working “smarter, not harder.” What policies can you set in place that will be correct most of the time, but won’t bog you down in red tape or fail in the face of change? This question is explicitly addressed in AI problems – finding narrow solutions is called overfitting, because you have fit too closely to the original problem, with no flexibility left to solve new ones. Overfitting can happen in business as well.
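To make the overfitting idea concrete, here is a toy sketch – the task, the numbers, and the function names are all invented for this illustration. A “memorizer” that keeps one rule per case it has seen is perfect on its training examples but helpless on anything new, while a simple rule of thumb covers everything, if occasionally imperfectly near the boundary:

```python
# Toy task (invented for this sketch): label numbers "big" if >= 50.
train = {3: "small", 97: "big", 12: "small", 60: "big"}

def memorizer(x):
    # Overfit: one rule per training case, nothing in between.
    return train.get(x, "???")

def simple_rule(x):
    # Rule of thumb: wide coverage, adaptable, occasionally wrong.
    return "big" if x >= 50 else "small"

for x in (97, 45, 88):  # 45 and 88 were never seen in training
    print(x, memorizer(x), simple_rule(x))
```

The memorizer answers `"???"` for 45 and 88 because it never saw them; the simple rule handles them fine. That gap between training performance and new-problem performance is the whole complaint against overfitting.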
When setting policy, there are two natural impulses. One is to be perfect – that is, to cover all cases. The other is to be absolute – to set rules that have no exceptions. These impulses are in opposition, but both can lead you astray.
The impulse of perfection is the harbinger of bureaucracy. A ten-page manifesto on appropriate Internet usage in your office is probably more effort than the issue deserved. You are not in the business of moderating Internet usage. If you want to exude a consistent ethical character, the minutia of an overwrought policy is a poor way to do it. You stand for values, not tedious rules. Ornate policy is like trying to compute the optimal move in chess – you are just wasting valuable time.
On the other side, however, is the cold rigidity of absolutism. Say that eight years ago your office was plagued with employees who constantly checked their personal e-mail. In one simple stroke, you solve the problem with the elegant, if short-sighted, policy “Personal e-mail is not allowed.” The problem goes away. You feel good, and two years later your employees are frittering away man-hours on MySpace. “No personal e-mail,” you say, only to be met with “This isn’t e-mail.” Your solution was effective, but too narrow. I can imagine a similarly misguided boss making a “No downloading” policy in the late ’90s.
So the “trick” is to find the middle ground. You want to set rules that cast a wide net, but you don’t want them to be overcomplicated. Leaving you with that simple platitude would be unhelpful, though. Believe it or not, a common roadblock to elegant policy is a lack of understood goals. To use the AI analogy, we know a chess computer seeks to maximize its power while minimizing its opponent’s, but what that actually means is less clear. “Win” is not a long-term strategy. You can’t come up with good rules if you don’t even know what it is you are trying to minimize or maximize. In chess, this value is often material (trade your pieces for their better pieces; otherwise, evade capture). In plain English I call this step Figuring Out What’s Important. In our office example, your true goal isn’t to regulate Internet use; it’s something larger and more abstract (maximize profits, maximize employee output, etc.). When you make policy, bear your true goals in mind.
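What “material” means as a thing to maximize can be sketched in a few lines. The piece values below are the conventional textbook ones; the position encoding (a string of piece letters per side) is invented for this illustration:

```python
# Conventional textbook piece values (pawn=1, knight=3, bishop=3,
# rook=5, queen=9); the string encoding of a side is invented here.
VALUES = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9}

def material(my_pieces, their_pieces):
    """Score a position as my material minus theirs -- the quantity the
    rules of thumb ("trade up, evade capture") are trying to maximize."""
    return sum(VALUES[p] for p in my_pieces) - sum(VALUES[p] for p in their_pieces)

# Trading my knight (3) for their rook (5) nets a +2 swing:
print(material("PPPR", "PPPN"))  # → 2
```

The point isn’t the numbers; it’s that once you’ve named the value you’re maximizing, short rules like “trade up” follow naturally from it.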
I leave you with an anecdote about an AI competition I participated in. The game was similar to Risk, and the professor provided a simple AI that merely attacked whenever it had any numerical advantage at all. When designing AI for strategy games, you are essentially faced with the same challenges as setting business policy. You can’t rely on chaos. You need rules to make smart decisions, but you don’t have the luxury of exhaustively computing the best move either. You need short, simple rules that usually do the right thing. The irony was that many students labored extensively on their agents, only to find that they were consistently beaten by the dumb, thoughtless beast the professor had provided. The game, as it happened, favored aggression, and the simple rule of “always attack” was better than most other complicated policies. Funnier still, a student simply entered the professor’s agent into the competition and took third place. Even after witnessing defeat, students refused to believe that the correct policy would be a simple one.
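In spirit, the professor’s entire agent fits in a few lines. Only the decision rule comes from the story; the game interface here is invented for the sketch:

```python
# The whole policy from the anecdote: attack on any numerical
# advantage at all. The border representation is invented here.

def should_attack(my_troops, enemy_troops):
    return my_troops > enemy_troops

def choose_attacks(borders):
    # borders: list of (my_troops, enemy_troops) pairs along the frontier
    return [i for i, (mine, theirs) in enumerate(borders)
            if should_attack(mine, theirs)]

print(choose_attacks([(5, 4), (3, 3), (10, 2)]))  # → [0, 2]
```

Three lines of policy, no lookahead, no planning – and in a game that rewards aggression, that was enough to take third place.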