Anti-Buzz: Cats All the Way Down

by Andrew Emmott on July 31, 2012

in Anti-Buzz

The Buzz: GOOGLE CREATES GIANT BRAIN NETWORK THAT SCANS THE ENTIRE INTERNET AND LEARNS TO RECOGNIZE CATS

The Anti-Buzz: Not exactly

Or

Computers are Stupid

On June 15th, an acquaintance shared this link. It’s a research paper on machine learning, a topic I am very interested in, so I dove into it right away and enjoyed it. If you can muddle through the abstract, you will stumble on a sentence that sounds familiar: “We also find that the same network is sensitive to other high-level concepts such as cat faces and human bodies.” (Emphasis mine).

About ten days later, news stories such as these began to surface. Reputable journalistic outlets were not reporting the results of this paper; instead, the story was all about Google creating a “Neural Network” using thousands of computers to somehow create a “brain” that detects cats.

Last December I wrote something of a rant about the misrepresentation of science and statistics in media. I had read enough research papers to understand the disconnect between rigorous scientific exploration and how news outlets want to spin it. Now, thanks to this Google cat face story, we can see the phenomenon in action.

This is a great example because I read the research in question, and the research has basically zero political, economic, or social heft to it. I can talk all day about how the news misrepresented this piece of research and I have essentially no chance of rubbing anybody the wrong way. The worst I’m going to do is disenchant a few cat lovers.

First, I could enumerate all the real contributions provided by this paper, but that could be a bit boring, and I don’t have the space. Let this be the disclaimer that nothing I say below should suggest that this research isn’t valid, interesting and useful – which brings me to my first point: the science doesn’t have to be bad or wrong in order for media coverage of it to be misleading and unhelpful.

Second, let’s not kid ourselves; the cat-face corollary is the only reason we’re here. The “news” isn’t Google’s contribution to Machine Learning (which is too bad, because it’s a good contribution), but that out of Science’s dour face comes the amusing fact that somebody accidentally made a cat detector. The fact that this came from Google makes it more charming, but I think this research would have found its way onto NPR eventually anyway.

The important observation here, however, is that this science became newsworthy because it reinforced a preexisting belief: namely, our collective belief that the Internet is full of cats. Reinforcing preexisting beliefs is the mainstay of science-in-media. Try to remember this the next time you point at any “study” that seems to argue in your favor; the only reason you got to read about it in the first place is not that the study was valid, but that it would help someone somewhere win an argument.

Further, buzzwords are bad, but when journalists get their hands on science words, watch out. Let’s tackle the big one: Neural Network. It sounds like we’re one step away from Skynet when you start throwing that thing around. It gets worse in this particular case because Google distributed their neural network over a “large network of computers,” making this sound even more like crazy science fiction.

The recent Battlestar Galactica series made a big stink about how networking lots of computers just automatically makes them smarter and able to kill you. So to an outsider: Neural Network? Spread over an entire Google datacenter? Unless movies and TV have lied to you, we must be doomed. (Hint: movies and TV habitually lie to you.)

What you will never hear about neural networks is that they’ve been around longer than my Dad, and have spent most of their history not really working. Most of the time, techniques with less sexy names have worked better, but you never hear about those because such news stories wouldn’t evoke notions of mad scientists sculpting an electronic brain and thus wouldn’t reinforce the preexisting belief that we’re all doomed.
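For the curious, here is what a neural network actually is once you strip away the buzz: weighted sums pushed through a squashing function, layer after layer. The toy sketch below is purely illustrative (the weights are made up rather than learned, and it has nothing to do with Google’s actual system), but structurally it is the same kind of object the headlines are calling a brain.

```python
# A "neural network," minus the buzz: each "neuron" takes a weighted sum of its
# inputs and squashes the result into the range (0, 1). Illustrative only; the
# numbers below are arbitrary, not learned.
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs, weights, biases):
    # One layer: a weighted sum plus a bias for each neuron, then the squash.
    return [sigmoid(sum(w * x for w, x in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

# Two inputs -> two hidden neurons -> one output.
hidden_w = [[0.5, -0.3], [0.8, 0.1]]
hidden_b = [0.0, -0.2]
output_w = [[1.2, -0.7]]
output_b = [0.1]

x = [0.9, 0.4]                    # some made-up input
h = layer(x, hidden_w, hidden_b)  # hidden layer activations
y = layer(h, output_w, output_b)  # network output
print(y)                          # a single number between 0 and 1; no brain involved
```

Training is just the slow, brute-force business of nudging those made-up numbers until the outputs start looking right, which is part of why it takes millions of pictures and a datacenter’s worth of electricity to find a cat.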

Around 2006 there were breakthroughs that finally got neural networks doing the sort of things we always wished they could, and so there has been something of a gold rush to prove things about them. Six years later, Google makes the world’s first accidental cat detector and now, suddenly, we have news.

So the next point to note is that there is an awful lot of science you never hear about, and when something novel like this happens, news outlets have to make the story short and easy to understand. They take on the impossible task of summarizing years or even decades of theory and research, and they usually attribute all of it to whoever just earned the spotlight. It sounds like Google invented deep learning, unsupervised learning, or even neural networks in general, which is neither true nor indicative of the real positive impact of the paper. Nuance and accuracy take a back seat to digestible metaphors. By the time it’s a news story, it is disconnected from any real scientific debate – so again, be wary of “proving” your points with science you read about in a magazine.

Lastly, while the results are positive, interesting, and even a bit cute, they offer me the chance to restate my mantra – which is that computers are stupid. When you were a small child it didn’t take you 10 million pictures and 3 sleepless nights to figure out what a kitty was, and even the most energetic of toddlers requires significantly less electricity than a Google datacenter. Newborn humans learn to recognize faces within an hour of birth. It amazes me that the stock reaction to AI news is that the machines are on the verge of some takeover, when it is clear that the human brain is still immensely more efficient than a computer. I tire of doomsday prophets because they never appreciate how smart we all are.
