Anti-Buzz: Modeling the World

by Andrew Emmott on August 16, 2014

in Anti-Buzz, Future Tech

Andrew has been writing Anti-Buzz for four years, resulting in almost 200 articles. For the next several weeks we will revisit some of these in case you missed them.

The Buzz: You can’t beat human intuition.

The Other-Buzz: You can’t beat the hard truth of math.

“The plural of anecdote is not data” – an aphorism of somewhat uncertain origin. I will still begin anecdotally: when I was much younger, there was a joke about “them”. They told you eggs were bad for breakfast, (cholesterol), then they told you they were good, (protein), then they shocked you by telling you it was ideal to be 20 pounds overweight, (they later told you this was not actually true). They told you what the world population would be in 2050, the precise moment we’d run out of room for our carbon footprint, (or whatever), the probability that we’ll die of a heart attack, the probability that we’ll die in a car accident, and the log-likelihood of the Chicago Cubs winning a game on a windy Tuesday against a left-handed pitcher.

“You know, they say you shouldn’t swim after eating.” “You know, they say Shakespeare didn’t write all that stuff after all.” It seemed we were always talking about them. “Who are ‘they’, anyway?” That was the joke. There were a few decades when we were all just connected enough to be inundated with the statistical analysis of the world, but not connected enough that we could be guarded against it by snarky basement bloggers. “They” aren’t quite so loud anymore because we’re all much louder. Or I’m just older. Personal anecdote after all. In my day we had to go uphill both ways through nutritional studies and actuarial tables.

I’m becoming one of them. Yes, one of “them.” The they who tell you, with absolute uncertainty, what the world is really like and how it works.

I planted a seed last week: the machine learning idea of a “loss function”; but I would be myopic to think it was something special to my field. In business, economics, or even just poker, we talk about risk versus reward. In ethics we talk about utility. All of us make decisions, and all of us do our best to make the right ones. Somewhere in our heads is a line: when it gets crossed we make decision A; when it doesn’t, we make decision B. Without meaning to, we quantify everything. When something becomes worth enough, (or worthless enough), our decisions will change.
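To make that “line in our heads” concrete, here is a minimal sketch in Python. Everything in it is invented for illustration (the umbrella scenario, the probabilities, the costs), but it shows the basic move a loss function makes: weigh each choice by its expected loss, then pick whichever costs less.

```python
# A toy loss-function decision: carry an umbrella or not?
# All numbers here are made up for illustration.

def expected_loss(loss_if_rain, loss_if_dry, p_rain):
    """Average an action's loss over the two possible worlds."""
    return p_rain * loss_if_rain + (1 - p_rain) * loss_if_dry

p_rain = 0.3  # assumed chance of rain

# Carrying the umbrella is a small nuisance either way.
carry = expected_loss(loss_if_rain=1, loss_if_dry=1, p_rain=p_rain)
# Leaving it home costs nothing if it's dry, a lot if it rains.
leave = expected_loss(loss_if_rain=10, loss_if_dry=0, p_rain=p_rain)

print("carry" if carry < leave else "leave")  # prints "carry"
```

Change the probability or the costs and the decision flips; the point where it flips is exactly the line that gets crossed.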

Human intuition can often trump cold math, and sometimes a rigid mathematical policy trumps human bias.

As we automate more and more of our world, this is the tension that needs resolving. The human brain has on its side speed, adaptability, and an unquantifiable capacity for good intuition. Machines have on their side a lack of emotional bias. When humans fail, it’s because they make a decision whose correctness would imply a certain world order. If you obsessively recycle, you make that decision because you believe it is correct, and its correctness would suggest something about the nature of the world that you want to believe is true. So, despite your intelligence advantage, your opinions and worldview can be the source of poor decisions.

The machine learner is not so encumbered and won’t make poor decisions based on which party it votes for; but lacking the ability to form a ‘worldview’ is also what keeps it from becoming as adaptable as a human: the machine will just live with the assumptions it is given and do the best it can. Of course, a machine can still be biased toward a worldview provided by its creator, but that is precisely why we turn to mathematical and statistical models: because they are easier to examine and justify as unbiased. In practice, machine models of the world tend to suffer from oversimplification, not their creators’ bias.
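As a toy illustration of that oversimplification (the data below is fabricated, not from any study): ask a straight-line model to describe a world that is actually quadratic. The model has no politics and no agenda; it simply does the best a line can do within the assumptions it was given.

```python
# Fit a straight line, by ordinary least squares, to data drawn from
# a quadratic world. All data is made up for this sketch.

xs = [x / 10 for x in range(-20, 21)]
ys = [x * x for x in xs]  # the world is quadratic; the model assumes a line

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / sum(
    (x - mean_x) ** 2 for x in xs
)
intercept = mean_y - slope * mean_x

print(f"best line: y = {slope:.2f}x + {intercept:.2f}")  # y = 0.00x + 1.40
```

The fit is honest and reproducible, and it is still wrong everywhere, because the model’s view of the world is too simple. That, not a creator’s hidden agenda, is the usual failure mode.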

Both of these tendencies are obstacles in our near future. Last week I spoke of the political hurdles the driverless car will face. Imagine if computer scientists developed a system that computed the best use of public funds. Or the correct way to fight forest fires. Or the best way to use troops in a war. You can imagine the reaction politicians would have if the results ran counter to their policies, or the reaction the public would have if too many fires were left to burn, or too many troops sent to die.

The question becomes: are we so emotionally attached to certain ideals that we mistrust the computer, or is the computer failing to understand the complexities of the world in a way only we can? I believe learning how to ask this question will be critical to the 21st century. Artificial Intelligence isn’t about making fake people – that’s science fiction – it’s about automating decision-making in areas traditionally trusted only to humans. This has the potential to relieve us of tedious tasks, (driving, assembly line work), but also has the potential to inform high-intelligence policy decisions, (diagnostic software, for example), free from emotional bias. The latter requires trust from people toward machines, and also improvements in our ability to model complex concepts.

The revolution of comfortable, easy-to-use hardware and software is going a long way toward developing that trust. The complexity is still up for grabs.
