I recently listened to a talk about machine learning; it began with the broad argument that our attempts to create sustainable environments had failed. By the speaker’s own admission, “sustainable” was a buzzword that he was almost embarrassed to use (a nice parallel to my own focus here at “Antibuzz”).
“Sustainable” is a classic buzzword in the sense that it long ago left its home domain and has since broadened well beyond its original meaning. The speaker clarified his point by mentioning “ecological surprise” and arguing that humans have failed to anticipate changes in ecologies, despite our great political and scientific concern over the matter.
An easy example is Australia and its now-infamous chain-reaction environmentalism, wherein the solution to an invasive species is to introduce another invasive species, causing more havoc than the smokestacks of Armageddon could ever hope to. A more subtle version is the salmon hatchery that inadvertently damages the wild salmon population by, if you’ll pardon the over-simplification, introducing stupid spoiled-brat fish who have no clue how to survive in the real world. Less controversial instances include our simple inability to predict large, sweeping changes: strange or dramatic things happen to certain environments independent of any human activity, and we have no idea why, or why we failed to expect it.
The broad point was that we needed to do better at modeling the environment, “and let me tell you how computer science can help …”, and so off we went down the real trajectory of the speech. It’s not just political neutrality that makes me tip-toe around environmentalism – it’s not really my specialty, and as someone with a specialty of my own I’ve come to learn exactly how annoying outsiders-who-think-they-know-better can be. But at the very least we can imagine a very simple political model that works like so:
1. A politician or interest group believes that we need to improve Environment A because it is in bad shape.
2. Funds are secured for the purposes of improving Environment A.
3. Environment A does not improve.
4. More funds are secured.
5. Return to step 3 and repeat.
And while that is a crass and cynical model, it’s the sort of nightmare scenario that the speaker was so concerned with understanding. To quote the modern proverb, “Work smarter, not harder.”
Stepping away from something as politically charged as environmentalism, the above model is really just the outline for thick-headed problem-solving, where an individual or organization is driven by the belief that more “money” or “effort” is the only requirement for victory. It’s the boss who is worried about general productivity and so just marches around barking for everyone to “work harder,” as if this were itself a solution. It’s “working harder and dumber.”
The buzzword for this is linear thinking. (“Non-linear” or “lateral” thinking is itself a buzzword for plain cleverness, but in computer science “linear” refers to a specific algorithmic complexity.) The best everyday explanation of a linear problem is that the amount of work it requires scales directly with the size of the problem. Twice as much grass to mow takes twice as much time. That’s a linear problem. Most wasteful boondoggles are forged on the assumption that they are dealing with a linear problem when they are not.
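The mowing example fits in a few lines of Python; the names and the mowing rate here are my own illustration, and the only point is that doubling the input exactly doubles the work:

```python
def minutes_to_mow(square_meters, rate=2.0):
    """A linear problem: the work required scales directly with input size.
    (The rate of 2.0 square meters per minute is an arbitrary illustration.)"""
    return square_meters / rate

# Doubling the lawn doubles the mowing time -- no more, no less.
assert minutes_to_mow(400) == 2 * minutes_to_mow(200)
```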
Imagine I had tasked you with sorting one deck of cards per day. And let’s say your solution was to simply take the cards and shuffle them, check if they were in sorted order, and then repeat shuffling and checking as needed until the cards were sorted. This solution works in the sense that eventually the cards will be sorted. If I gave you a small number of cards, say just 4 or 5, this solution wouldn’t be particularly awful. You would easily make your deck-a-day quota. But then one day I send you a bigger deck, then a bigger deck. It won’t take long before the obvious stupidity of the shuffle-until-sorted algorithm becomes apparent. The solution isn’t to “shuffle harder” but to reexamine the problem and come up with something better.
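The shuffle-and-check strategy is real enough that computer scientists have a (joke) name for it: bogosort. Here is a minimal sketch (function names mine) that makes the absurdity measurable – for a deck of n distinct cards, each shuffle has a 1-in-n! chance of landing sorted, so the expected number of shuffles grows factorially:

```python
import math
import random

def is_sorted(cards):
    """True if the deck is in non-decreasing order."""
    return all(cards[i] <= cards[i + 1] for i in range(len(cards) - 1))

def shuffle_until_sorted(cards):
    """Bogosort: shuffle at random, check, repeat until sorted.
    Returns the number of shuffles it took. Sorts the list in place."""
    attempts = 0
    while not is_sorted(cards):
        random.shuffle(cards)
        attempts += 1
    return attempts

# Expected shuffles for n distinct cards is roughly n! -- the work
# explodes far faster than the deck grows. "Shuffling harder" is hopeless.
for n in range(2, 8):
    print(n, "cards ->", math.factorial(n), "expected shuffles")
```

Running it on a 4- or 5-card deck finishes in an eyeblink; a full 52-card deck would, on average, take more shuffles than there are atoms in the observable universe.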
Of course, recognizing when you just need to keep mowing grass and when you need to stop shuffling cards is not as easy as it is in my simple analogies. However, the analogy does describe certain complex computer science questions. The lay assumption about computers is that all problems have linear solutions; as if the key to cleaner language translation or better speech recognition or more accurate driving routes from your GPS is just more RAM and faster processors (surprise: it’s not). Linear thinking extends beyond computers and into the problems you face every day.
Don’t waste too much time shuffling.