
Anti-Buzz: Public Journaling

Andrew has been writing Anti-Buzz for four years, resulting in almost 200 articles. For the next several weeks we will revisit some of these, in case you missed them.

Something very important: software metaphors.

Another piece of metaphorless software that you are likely familiar with is Facebook. Even with ‘book’ in its name, Facebook, and social media in general, doesn’t really look like anything you’ve ever used before. In fact, its ability to change its interface at will and still be Facebook is evidence that there is no real-world template for what is happening online. You can see it as a completely new technology. Through the 20th century, nothing that resembled social media existed. We’re not talking about an electric typewriter or an information superhighway here; we’re talking about something for which there are no metaphors.

Most of what you read about the technology revolves around advice for leveraging it, (which is fine). Others are interested in trends. Not a lot of what you read tries to make literal sense of what the technology is. So, what, really, is social media?


Social Media is Metaphorless

What does that mean? First, it means that social media is completely new, not simply an old form of communication enhanced by the Internet’s connectivity. This, in turn, means that we don’t fully understand its potential, nor its dangers.

Second, the lack of metaphor contributes greatly to the sense of alienation it brings to some users, especially older ones. The generation gap between young Facebook addicts and their confused parents is in part created by the metaphorless interface. The same people who “get” email, word processing, and streaming television “don’t get” Facebook. The difference between an old technology revamped with modern enhancements and a genuinely new technology is huge.

Social Media is Public

Setting your “well, duh” aside, think for a moment how important it is for most people to feel like they are sharing an experience. One source of wedding planning agony for me and my fiancée was the choice of music for the reception. If your goal is to get everyone up and dancing, then good songs familiar to everyone are more valuable than better songs familiar to nobody. Familiarity is important. Sticking to music, many people just want to be aware of what the most popular music of the moment is, not because they even particularly like half of it, but because their familiarity with it will be shared with many other people. This familiarity produces talking points with both friends and strangers. Popular media in general maintains its popularity by promising a shared experience.

One of the biggest impacts of social media is that its persistent and public nature facilitates these sorts of comforting, shared experiences, but it does so without the expense of producing popular “lowest-common-denominator” entertainment to go along with it, (it has become abundantly clear that your average person can produce that sort of thing on their own). So, yes, obviously social media is very public, but the implications of this are not always taken seriously. If you’ve ever wondered about the difference between a Facebook addict and somebody who only uses it begrudgingly, you might consider it is not unlike the difference between somebody who listens to Top 40 Hits on the radio and somebody who couldn’t care less.

Social Media is Journaling

A common complaint I hear about social media is the quality of the content that people’s friends produce. People who aren’t enchanted with the technology say things like, “I don’t need to know when my friends are at Starbucks.” And it’s true; none of us needs to know when anybody is at Starbucks. In my field I do occasionally come into contact with studies done on the content of social media, and if there is only one broad-stroked generalization to make, it is this: the vast majority of people who use social media spend the vast majority of their time journaling the mundane details of their day. The enormous bulk of all tweets are simply things like “at work,” “at lunch,” “having a beer,” and “thank goodness it’s Friday” – all the minor goalposts of one’s day, summarized over and over again. As if I needed more validation of this, a friend of mine complained of the capriciousness of Facebook, saying he would post a nice photograph he took and get no response, but if he said he was eating lunch at some restaurant, he would get 30 ‘likes’.

Pooling the last two concepts together, what we get is that social media is really just a public journal. You write in your diary, but then you leave it on the coffee table and invite people to look through it. That’s social media. The catharsis of journaling, with the comfort and validation of sharing your experience. And yes, if neither of those things appeals to you, then the whole institution is going to look a little strange.

If I am to bring this back around to some practical advice: I was wrong so long ago. People will /totally/ become fans of their dentist. You might not generate a lot of traffic or interest simply by maintaining a Facebook page, but an active user will, when they come in for treatment, likely make a post or two about where they are. If they can link to you or your page, even better. If their friends kill five minutes by talking about dentists, you might earn a new patient. It costs you very little to simply make sure you have enough online presence to put an email address or phone number into someone’s hand. At minimum, social media allows you to maintain a magic billboard ad that will appear wherever people talk about you. Social media, and Internet connectivity in general, is about lowering barriers to information – so lowering the barriers to information about your business is a simple and easy extension.


Anti-Buzz: Communications Breakdown


Why do computers lie to us? Why don’t they always listen to us? Well, they don’t lie to us exactly, and they can’t really ignore us, but given that we are prone to take everything they do so personally, it feels like lying. It’s hard not to feel slighted when communication breaks down between you and your electronic vessel. Is it reasonable to feel so personally invested in our computing? I say yes.

I used to classify my time playing video games as time playing solitaire. I was, after all, alone. But in a sense you could argue that, at the very least, I was competing against the design talents of the people who made the game. The best gaming experiences are the ones where you can feel some sort of implicit dialogue between yourself and the game designer. The same is true of reading books; you can call it solitary, but you are in some ways conversing with the author, provided they have suitably engaged your mind.

The same is true with computing. I spend a lot of time trying to demystify the apparent “intelligence” of computers and praise the real intelligence of humans, but I am admittedly swamped in these ideas thanks to my coursework and research. The truth is, most of the time, for most people, computing is more like reading a book or going to see a play; there is an implicit communication between the user and the creator. The inability of the computer to “know what you want” is, yes, a function of its non-existent intelligence, but it is also sometimes a failure of the engineer behind the software. Of course, trying to write software that works for everybody is like trying to write a novel that pleases everyone; the best you can achieve is popularity.

Given that personal communication is so integral to everything we do, (increasingly so now that it has become easier and easier to manage), I think we can learn a few things from the communications breakdowns we face with computers every day.


Ambiguity and Trust

I think the only appropriate response to the preceding dialog box is “Help.” This is a cherry-picked example, the result of my plumbing Google image search for something suitably obnoxious, but dialog boxes are often ground zero for communications failures in computing. Worse is that user studies show that most people nowadays just click through these messages; and fair enough, I say. This simple and effective way to prompt the user has been killed by overuse. It used to be that my most common computing advice to people would in fact be to click through all these things. Neophyte computer users used to be so intimidated by the plethora of obtuse prompts that offered little in the way of choice or information that the only way to get people over the hump of technophobia was to encourage them to ignore all prompts.

Things aren’t as bad now, but we still run into the occasional choice between “Yes” and “Okay”; or “Yes,” “No,” and “Cancel”; or even a straightforward “Yes” and “No” when no question has been asked.

What we can learn:

  • Don’t provide irrelevant information or prompt too often – you only train people to trivialize what you have to say and lose trust in your ability to communicate.
  • Offer clear options, and don’t offer too many – too many choices either obscure what is important or make people think you don’t care what happens.
  • If you have a question to ask, remember to actually ask the question – (People make this mistake more than they would want to admit – and then get frustrated when their concerns go unacknowledged).
  • Don’t require an immediate answer if it is too disruptive. (Computers are still bad at this).

Time Estimation

You would think computers would be better at this: estimating the time. We’re certainly bad at it, (or at least some of us are), trying to cram too much into one day, or not enough into another, showing up too late or too early. The world, despite its efficiencies, is full of these tiny mistakes. Your download will take 3 hours to complete, then 5 minutes, then 32 minutes, all in the course of one real minute. It seems a mechanical process: measure something, add the somethings together, combine them into an agenda – so why are computers as iffy as we are about this? The exact details aren’t important, but suffice it to say that if computers could ever know exactly how long something would take, they would get a lot more done than they already do. The same is true of ourselves; if we always made these guesses correctly, we would spend our time more wisely and get more done. However, time estimation is also about communication. The world is full of collaborators, and they all have to know how long the other is going to take. Shaky time estimates from computers might be the stuff of jokes for us, but they are still a crafted part of the user experience.
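The jumpiness has a simple arithmetic cause: a naive estimator divides the work remaining by the most recent throughput sample, so every burst or stall swings the forecast wildly. Here is a minimal sketch (all the numbers are invented for illustration) of the naive estimate next to a common smoothing fix:

```python
# Sketch: why download ETAs jump around, and one common smoothing fix.
# All throughput numbers below are made up for illustration.

def eta_naive(bytes_left, last_rate):
    """Extrapolate from the most recent throughput sample alone."""
    return bytes_left / last_rate

def eta_smoothed(bytes_left, rates, alpha=0.3):
    """Exponential moving average: each new sample only nudges the estimate."""
    avg = rates[0]
    for r in rates[1:]:
        avg = alpha * r + (1 - alpha) * avg
    return bytes_left / avg

rates = [2000, 50, 1200, 60, 900]   # bytes/sec on a bursty connection
left = 10_000_000                   # bytes remaining

naive = [eta_naive(left, r) for r in rates]   # swings from minutes to days
smooth = eta_smoothed(left, rates)            # changes gradually instead
print(min(naive), max(naive), smooth)
```

The naive estimates here range over a factor of forty; the smoothed one stays in between, which is roughly why modern progress bars feel calmer than old ones.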

What we can learn:

  • Stay in communication when time estimates change. (But don’t overdo it).
  • Err on the side of overestimation. Apart from the old trick of playing with expectations, it is better to accidentally have too much time than accidentally have too little. Overestimation is the better mistake to make.


Consistency

It is amazing how consistent computers are, given that software developers can’t even agree on the best route out of a burning building. Imagine if you opened an application and the scroll bar was on the left, the close-window button was moved to a bottom corner, and the “Edit” menu came before “File”. You aren’t stupid and you aren’t unadaptable, but that application would always feel difficult to use, despite no particularly bad design decisions.

What we can learn:

  • Communication is improved with consistency. Expectations speed up communication and understanding with less information.
  • Deviating from standard expectations has a cost, (But is sometimes worth it).

Anti-Buzz: You are the Cloud


The Buzz: A person is smart, but people are stupid.

I’ll openly admit that when I first came here I didn’t quite know what to tell you about social media. The past couple of years have seen business attitudes toward social media move from uncertainty, to frustration, to acceptance, if not complete understanding. Of course, there is a world of difference between what a 20-year-old wants from Facebook and what AT&T wants. A simplistic view would be to say that social media represents public-private life, (in the sense that the interactions you have with friends in a theater or restaurant are public-private), and businesses should keep out, or at least not expect to fare better than a canvasser handing out pamphlets; but I think we already know this isn’t quite right either. I have an alternative view to share, given to me in a talk I saw last week: We are the cloud. Social media, or any other crowd-sourcing service that earns the participation of large numbers of strangers, is to people what the cloud is to computers.

So, that’s great. I’ve just explained a buzz word with a different buzz word, violating everything this column stands for. But no, seriously, let’s get after what I’m really trying to say.

Review: What is the cloud?

I don’t mean to talk your ear off about the cloud, at least not this week, but to cut away the generalizations inherent in buzz words, the driest, most straightforward explanation of cloud services is that they facilitate the deferral of computation. The cloud is about delegation and cooperation. If you need a lot of computational power now, and there’s a computer in the other room that isn’t doing anything, why isn’t it helping your computer get the job done? At its most ideal, the cloud is about infrastructure that lets us all worry less and less about which computer is doing the work. It’s about leveraging their connectedness into cooperation, sometimes performing tasks better or faster than any single machine could.
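As a toy sketch of that deferral idea – not real cloud infrastructure, just Python’s standard thread pool standing in for “the computer in the other room” – the caller splits a job into pieces, hands them off, and never cares which worker did what:

```python
# Toy sketch of deferred computation: the caller doesn't care which
# worker does each piece, only that the combined answer comes back.
from concurrent.futures import ThreadPoolExecutor

def busy_task(chunk):
    # Stand-in for real work: sum a slice of numbers.
    return sum(chunk)

numbers = list(range(1_000_000))
chunks = [numbers[i::4] for i in range(4)]   # split the job four ways

with ThreadPoolExecutor(max_workers=4) as pool:
    partials = list(pool.map(busy_task, chunks))

total = sum(partials)
print(total)  # same answer as doing it all in one place
```

The point of the sketch is the shape, not the speed: the work is described once, partitioned, delegated, and recombined, and the caller’s code never names a particular machine.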

So, now imagine that instead of talking about computers, I was talking about people.

The Internet is a Cloud of People

Forget the latest captioned cat photo or how many likes your practice is getting and instead consider the role played by social media in the Arab Spring, and the role it will play in any other oppressed part of the world. Consider how difficult it is to be anti-democratic in a world where everybody talks to each other, (talk being the most democratic institution of all). Admittedly, this is the heavy, dramatic, (and easy to understand), view. But understanding the impact of our connectedness begins with appreciating the power of putting mass communication in everyone’s hands.

Consider now crowd-sourcing efforts like Kickstarter, which have essentially made anything self-publishable. Like a good medicine show, there are hucksters and people who are otherwise undependable, but the story of crowdsourcing is mostly one of success. Products and projects that were not possible in the pre-Internet world are now becoming commonplace.

So, think of the cloud analogy again and remember that the ideal is to do things faster and better than any one of us could on our own.

Working in science, I am reminded of this all the time, as many efforts are helped along by citizen science – the practice of letting large numbers of amateurs gather data. Just as the cloud is many processors, each with a small part of the whole problem, citizen science gives, say, every bird watcher in the world a small job to do.

As we have changed how we do things over the past decade – doing things “the Internet way” – a common concern has always been the caliber of the average person. The assumption was that blogging couldn’t challenge news media because it was composed by common people. The assumption was Wikipedia couldn’t work because there would not be enough interest, and it was too susceptible to vandalism, (and we all know how keen on vandalism your average joe is). The assumption was YouTube would never produce anything valuable, that the cacophony of new media would drown out good taste and that most regular folk were just not discerning enough to find and support the diamonds in the rough.

Important leaps in society are never founded on the belief that the common person is terrible.

What’s more, some research suggests that connected groups of amateurs, (“turkers” in this case, named for Amazon’s Mechanical Turk marketplace), can actually outperform experts. And if you are dubious of the idea, consider that Wikipedia is the greatest, broadest, most carefully edited encyclopedia in the history of the world, largely thanks to its disassociation with any governor-experts. Consider that Twitter is much better at informing you about earthquakes than traditional media ever could be.

If you’re a cynic, then the confounding reality we are facing is that highly connected groups of strangers are, in aggregate, capable of more intelligence and productivity than our best individuals are. And where two decades ago some were predicting an intelligence singularity, a sort of apocalyptic moment where humanity was permanently outclassed by its own technology, recent trends are beginning to suggest that, if anything, we might be headed for a human singularity where we learn just how much we really are capable of.


Anti-Buzz: Malaysia Airlines Flight 370


Last week I discussed probability in regards to something fun – the NCAA basketball tournament – but there’s another current news item that is thrusting statistics and probability into the mainstream, albeit a much more somber one: Malaysia Airlines Flight 370, which vanished mysteriously and as of this writing still evades discovery.

When the story first broke, the old needle-in-haystack metaphor was trotted out, with the additional complication of “We’re not even sure which haystack to look in.” In this case the haystack metaphor is more apt than you might guess. Consider a flattened map of the surface of the earth, and then consider that you add hills to the map, not by observing where the actual real hills are, but by raising the altitude in places where you thought it was more likely for the missing plane to be. The visualization might look sort of like this:

So when the search first began statisticians, given what they knew at the time and which “probability map” they could draw, were more or less staring at a bunch of haystacks, with no good idea of where to begin.

You’ve seen these sorts of probability maps before, even if you don’t realize it; if you’ve ever looked at a bell curve, you’ve seen one. These curves are representative of one type of probability density function; you’ll notice it sort of makes a “hill” over one location. This is the one-dimensional version of what I’m talking about: one axis of possibilities, with altitude adjusted for likelihood.
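For the curious, the “altitude for likelihood” idea is easy to make concrete. A minimal sketch of the one-dimensional case, using the standard bell-curve (normal) density:

```python
import math

def normal_pdf(x, mu=0.0, sigma=1.0):
    """Height of the bell curve at x: the likelihood 'altitude' over one spot."""
    z = (x - mu) / sigma
    return math.exp(-0.5 * z * z) / (sigma * math.sqrt(2 * math.pi))

# The 'hill' is tallest over the most likely location (the mean)
# and falls off symmetrically on either side.
heights = [normal_pdf(x) for x in (-2, -1, 0, 1, 2)]
print(heights)
```

Sweep `x` along one axis and you trace out the hill; the two-dimensional search maps in the news are the same picture with latitude and longitude as the axes.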

As more information has come out, they have reduced the number of haystacks, as well as moved them around. As of this writing, they have just moved the search area to a specific patch of Indian Ocean, off the west coast of Australia.

Remember what I said last week about the most likely thing being itself unlikely. I’m sure some statistician out there has a model of the situation and could point to an /exact/ spot on the Indian Ocean and say, “This is the most likely place for the plane to be,” and they could even be right, (about it being most likely), and yet the odds of the plane being exactly right there would be pretty long.

As with the NCAA bracket, you can improve your odds if you are allowed to select a region of possibilities. 15 million educated guesses at the NCAA bracket were still not even close to good enough to win, but statisticians can still map out a region of the ocean and make assertions about how likely the region as a whole is.

The more certainty you require, the larger the region becomes. I could, off the top of my head, draw a region that includes the missing plane with 100% certainty, and so could you; that region would be the entire surface of the earth. The fact that somebody has narrowed the search down to “probably in this big area” is actually pretty amazing.
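That certainty-versus-area tradeoff can be sketched in one dimension. For a bell-curve probability map, widening the search region captures more of the total probability, but with diminishing returns – the familiar 68–95–99.7 pattern:

```python
import math

def coverage(half_width):
    """P(|X| < half_width) for a standard normal: erf(w / sqrt(2))."""
    return math.erf(half_width / math.sqrt(2))

# Widening the region buys certainty, but each extra unit buys less.
for w in (1, 2, 3):
    print(w, round(coverage(w), 4))
```

Each step doubles or triples the region, yet the certainty gained shrinks from roughly 68% to 95% to 99.7% – which is why “100% certain” regions balloon toward the whole map.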

The other tech story coming out of this tragedy, the one that maybe hits home a little harder, is how, in this age of smartphones and GPS and cloud storage, a plane can get lost like this in the first place. I’m not qualified to answer that question, other than to say that “always on” tracking of every plane, all the time, sounds to me like an infrastructural nightmare, and perhaps not even worth the expense. Less excusable, however, is the fact that the flight records are trapped on this lost plane. Bandwidth for planes is expensive, but it seems to me that “the cloud” should be able to grab hold of at least a partial backup of flight records and keep them here on terra firma, so that we don’t have to actually find the black box at the bottom of the ocean before we can ask it how it got there.


Anti-Buzz: It’s so easy.




The Buzz: Tablets are going to kill the traditional PC

The Anti-Buzz: No, it’s true.

If you are my age it is inevitable that the topic of kids and tablets comes up; because you have kids and you have tablets, or you have friends who have kids and tablets, or you have friends who are disgusted that they have friends who have kids and tablets. Invariably, late night conversation turns to some anecdote about a 1-year-old teaching himself to watch his favorite videos on his mom’s iPad – though I’m not sure how amazing this is, since when I was very, very tiny my top priority was figuring out how I could watch Star Wars in a continuous loop.

Opinions on the matter vary, and are conditioned on how old the kid is, and what they are allowed to do, and whether or not they are reaching for their parent’s tablet or their own. Studies on the matter are nascent and inconclusive. If you’re keeping score, this is also the first time people of my generation are revealing themselves to be old-fashioned kooks. We are inherently biased, for lots of reasons, against any child having something we didn’t when we were that age. There are numerous legitimate concerns in regards to kids and tablets, but part of why it feels wrong is nothing but in-my-day, uphill-in-the-snow-both-ways, cane waving.

I can’t and won’t expound too much on child psychology, but this phenomenon of toddlers who are unbefuddled by tablets can lend some insight into why the devices are so popular. To wit, they’re so easy, even a baby can use them. That was once a figure of speech. With tablets it’s actually true. Exploring what that really means breaks down to two things: 1) the interface is friendly to illiterate people and 2) the interface is highly intuitive.

When I Googled “why do we like touchscreens?” (admittedly a bad query) I got these results; essentially, a list of articles written by cranky 30-year-olds waving their canes at touchscreens. They are wrong. All of them.

Much is being made of how mobile devices are killing the PC market. And I say that it’s true, but not so much in the sense that most hype dealers want to spin it. Yes, Intel’s sales are down. Yes, Internet traffic is increasingly mobile-device driven. But saying that the PC is dead is like some fashionista declaring the end of pencil skirts or this guy declaring that this bar is over. It’s a hip, wild thing to say, but it betrays a certain emphasis on the popular.

I say it’s true enough that PCs are going to die as a casual computing platform. The influx of new PC users that were encouraged by the convenience of the Internet, (but only the convenience of the Internet), don’t need a whole danged computer. Complex work still warrants a full computer, and always will. At the very least, the infrastructure and software that makes your mobile experience so convenient, the graphic design that makes it so colorful, the video editing that makes the content you love possible, these all need powerful machines with complex input modes. The PC isn’t going to die, the popular kids are just done inviting it to parties. From the popular kids’ point of view, this is the same thing as being dead, but what do they know?

In a way, things are returning to how they used to be. The tactile world of the tablet makes your experience less like a computer and more like fussing with paper and maps and real physical objects. Using a tablet is in some ways like doing things you used to do before tablets. The PC, meanwhile, is returning to the loving embrace of the dedicated computer nerd, and I’m sure that, quietly, the computer nerds wouldn’t have it any other way.


Anti-Buzz: The Smartphone RMA


I will admit to a small anti-buzz hypocrisy. My job is to cut away generalities and make your understanding of technology more concrete. The best way to make an idea stick, however, is with a story, and stories are the grossest of generalities. Today I’m telling you a story. It’s the story of how you were all tricked into embracing the Unix design philosophy.

I hinted at this before when I touched on Windows 8. I also call your attention to my previous lionizing of Steve Jobs, or actually, to the interesting Malcolm Gladwell article I linked when I did. (Side note: When I originally linked that article, I unfortunately attached a link to the last page of the article, possibly robbing you of the entire experience). The key analogy I’m looking for is the three-stage evolution of the RMA – the Revolution in Military Affairs – as explained in that article. The synthesizing of information technology into warfare was first imagined by the Soviets, then prototyped by the United States, and finally used to devastating effect by Israel. The evolutionary track of the mouse, (and by proxy, the personal computer), is the birth of the idea at Stanford, the implementation of the idea at Xerox, and finally the hot battle of commercializing it at Apple. Broadly: theory, practice, popularity. This track too explains how you were tricked into embracing something you didn’t even realize you embraced.


The birth of ideas often requires freedom from practicality, and freedom from pressure to produce results. Theory is exploratory, involves the study of known data, and despite ambiguous objectives, is rigorous. Good theoretical results are ones that show that a new idea is justified by existing principles. The priority is discovery, not implementation. The colloquial use of the phrase “in theory” is not far from the truth. After much analysis, the Soviets showed that, in theory, one could leverage electronics and information technology in warfare. Douglas Engelbart at Stanford showed that, in theory, we could slide little pieces of plastic on a table to move a pointer around a screen.

The Unix design philosophy was an idea meant to improve the quality of software tools available to software programmers. The idea, in a nutshell, is to focus on small pieces of software that do one thing well. This maximizes the power given to the user; complex tasks are accomplished by the clever merging of programs. Unix users build new small programs that do one thing well out of existing small programs that do one thing well. Or at least this is the idea.
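As a rough Python analogy for that philosophy – a stand-in for real shell pipelines, with invented toy functions – each piece does exactly one job, and the power comes from combining them:

```python
# Python analogy for the Unix idea: small pieces that each do one thing
# well, composed into something bigger. (A stand-in for shell pipes.)

def lines(text):
    """One job: split text into lines."""
    return text.splitlines()

def grep(pattern, rows):
    """One job: keep only the rows containing the pattern."""
    return [r for r in rows if pattern in r]

def count(rows):
    """One job: count rows."""
    return len(rows)

log = "ok\nerror: disk\nok\nerror: net\n"
n_errors = count(grep("error", lines(log)))  # the 'pipeline'
print(n_errors)
```

None of the three pieces knows about the others; a new task is solved by rearranging them, not rewriting them, which is the whole point of the philosophy.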


Prototyping, or proof of concept, is the realization of the idea. The institution that has the resources to build new ideas does not always risk the resources to discover them. The United States built the tools of modern warfare. Xerox made the mouse and the personal computer a reality. This prototyping has no end goal. The United States had no pressing military conflict. Xerox had no inroads into the PC market, (but their invention justified the adoption of their other famous innovation: the laser printer).

The energies of programming enthusiasts built a lot of the small programs that populate Unix environments. Moreover, they organized the distribution channels for this software, creating public repositories, and standardizing the way software would be installed on any Unix system. They invented a fast, streamlined system by which users could download small, single-task-oriented software. Some of you already know the punchline. What they did is they invented the app store. But they were a group of enthusiasts. Power users. Programmers. Not the electronic laity. Even for all the inviting and egalitarian sentiment in the open-source software community, the inventors of the app store were serving themselves, not the public.


The next key step is refinement and efficiency. This requires conflict and constrained resources. Israel is a small country surrounded by belligerents. Apple was the hot young tech firm with a desire to put computers in homes, not offices. Apple again was the innovator, but the conflict and resource constraints came from elsewhere.

The cell phone market was a feature race. Cameras. Texting. Web browsers. The conflict was the feature race, and the constraint was that it had to fit into your pocket. You can’t put a computer in somebody’s pocket with the old big-software-suite model of software development and distribution because you can’t put an installation CD in a cell phone and you can’t expect a tiny computer to tolerate the rigors of Microsoft Word. With space and processing time being so tight, the discovery and installation of new software needed to be controlled – the PC paradigm at the time was to just grab executables with your web browser and install them wherever and however you wanted. The infrastructural innovation of the Unix community is what enabled the smart phone.

I also do not think it is so much of a stretch to draw the connection. Steve Jobs, upon returning to Apple, overhauled the Mac OS. Mac OS X is built on top of an open-source Unix distribution. Much of the rebranding of Apple that happened during the second Steve Jobs administration was done while under the influence of Unix. The enthusiast community had been spending decades mastering simplicity in software, the Xerox PARC of its time in a way, and once again Apple remanufactured the ideas into something you wanted to put in your home.


Anti-Buzz: Traveling Salesman (Again)


So here’s something fun.

I’ve talked about the traveling salesman problem before. On one hand I should probably be happy that a serious Computer Science topic is given its own movie and sexed up for the general public, but I will be honest: this trailer makes me cringe.
This is anti-buzz after all, so let’s get to the task of dispelling the untruths and misdirections.
My first thought upon seeing this trailer was that it was going to unleash a new army of trisectors. What is a trisector? There is a famous old essay popular among mathematicians: What to do When the Trisector Comes? You do not have to be very math literate to appreciate it, and if you chose to just read that article instead of mine, I would not be offended.
In short, a trisector is a mathematical crank: a non-mathematician who for some reason gets it into their head either to solve some daunting, open math question or, worse, to ‘solve’ something mathematicians have already proven to be impossible. ‘Trisector’ refers to someone trying to trisect an angle using only a compass and straightedge – a problem posed by the ancient Greeks, since proven impossible, and largely irrelevant today because we have means of trisecting angles with better-than-ancient-Greek tools.

This trailer exaggerates the impact of finding an efficient solution to Traveling Salesman, but what upsets the crank-wary part of me is that it’s not entirely fictional; indeed, a simple explanation of what’s really at stake does sound a bit like mathemagical sorcery: if an efficient solution to any NP-Hard problem, (Traveling Salesman is one such problem), were discovered, then via reductions it would in turn yield efficient solutions to every other NP-Hard problem. Many NP-Hard problems are practical and have real-world implications, so the impact would be real.
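To see why the problem is so daunting in the first place, consider the naive approach. The little Python sketch below, (my own illustration, nothing to do with the film), just tries every possible tour; with n cities there are (n-1)! tours to check, which is exactly why brute force collapses almost immediately as n grows.

```python
from itertools import permutations

def tsp_brute_force(dist):
    """Exhaustively try every tour over a distance matrix.

    With n cities there are (n-1)! distinct tours once we fix the
    starting city, so this only works for very small n.
    """
    n = len(dist)
    best_cost, best_tour = float("inf"), None
    # Fix city 0 as the start/end to avoid counting rotations twice.
    for perm in permutations(range(1, n)):
        tour = (0,) + perm + (0,)
        cost = sum(dist[a][b] for a, b in zip(tour, tour[1:]))
        if cost < best_cost:
            best_cost, best_tour = cost, tour
    return best_cost, best_tour

# A tiny, made-up 4-city example, small enough to verify by hand.
dist = [
    [0, 1, 4, 6],
    [1, 0, 2, 5],
    [4, 2, 0, 3],
    [6, 5, 3, 0],
]
cost, tour = tsp_brute_force(dist)
```

Four cities means only six tours to check; at forty cities the same loop would run for longer than the universe has existed. That is the wall the movie is waving its hands at.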

The real scientific issue at stake is the question of P versus NP, two classes of problems that have different definitions but, maddeningly, nobody has proven are not equivalent. If they are not the same, then there are no efficient solutions for the NP-Hard problems, and if they are the same, then there are. While most scientists believe that P does not equal NP, the implications of them being the same are attractive to cranks: even now mathematicians are flooded with alleged proofs resolving the issue.

However, in the context of P versus NP, ‘efficient’ is a very loosely applied term. The layperson’s idea of computational efficiency is far more demanding than the bounds of poly-time algorithms. To put it in everyday terms, the difference between P and NP-Hard is the difference between years and centuries. Consumers want things done in less than a second. ‘Efficient’ solutions to Traveling Salesman would have a broad impact, but they would not mean that everything can be done in a second, and any other definition of efficiency means nothing to a consumer.
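The years-versus-centuries gap is easy to put in numbers. Here is a back-of-the-envelope Python sketch, (the billion-steps-per-second machine is an assumption of mine, purely for illustration), comparing a polynomial running time with an exponential one:

```python
# A hypothetical machine doing one billion primitive steps per second.
STEPS_PER_SECOND = 10**9

def seconds_needed(steps):
    """Convert a raw step count into wall-clock seconds on our toy machine."""
    return steps / STEPS_PER_SECOND

# A polynomial algorithm (n^3 steps) versus an exponential one (2^n steps).
for n in (20, 40, 60):
    poly = seconds_needed(n**3)
    expo = seconds_needed(2**n)
    print(f"n={n}: polynomial {poly:.2e}s, exponential {expo:.2e}s")

# At n=60 the polynomial algorithm finishes in a fraction of a millisecond,
# while the exponential one needs 2^60 / 10^9 seconds -- roughly 36 years.
```

A poly-time algorithm can still be far too slow for a consumer, but it at least stays on a human timescale; the exponential one leaves that scale entirely.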

This misrepresentation of computational efficiency is a chronic problem; the average person has no concept of what really hard problems look like. The trailer suggests the film has no desire to rectify this, as the analogy used in the trailer once again misrepresents the issue. Finding a coin in the Sahara is not a complex problem. It is only time consuming because there is so much sand, but the solution is simple: look through all the sand. The ‘glass’ solution eliminates one dimension – an improvement – but we were already well within the bounds of efficiency, by our loose definition of efficiency. The really hard problems are not ones with lots of sand, but ones that are difficult even when you have very little sand to look through.

I’m also not sure why the trailer has the image of a drill entering somebody’s head. I don’t see how that becomes more efficient as a matter of P versus NP. We really don’t need to be giving the cranks more to be excited about. They already think what they are doing is magic.

Exaggerating the impact of Traveling Salesman is forgivable: it’s a movie. Less forgivable is the anti-intellectual bent it has.

The trailer takes a turn for the uncanny with its use of the word ‘simplified’, presented not unlike some high-tech ad, preying on everyone’s fear that high-tech companies are luring us to our doom with their veneer of nice, simple, consumer-friendliness. People want to hate Google and its ilk for the influence and power they wield, even though they’ve pretty much only ever used it to make our lives better. People want to believe stories that say they shouldn’t trust that.

I would rather we weren’t so cynical, but that is a tall order as long as computer science remains mysterious and magical to the average person.

Anti-Buzz Internet

Anti-Buzz: Judo


Andrew has been writing Anti Buzz for 4 years resulting in almost 200 articles. For the next several weeks we will revisit some of these just in case you missed it.

When I’m thinking about modern technology but I’m not waxing philosophic about it here, I’m usually thinking about digital products – music, movies, photos, games, software – and the economic reality behind them. Occasional soapbox moments aside, I don’t delve into these topics much here because it has seemed to me that the subject wasn’t relevant to my audience. I also avoid the topic because arguing about intellectual property rights and effective business models in the digital age is pretty much Internet Arguments 101. There is a whole universe of discussion on the matter of piracy and the impending paradigm shift in notions of intellectual property, and talking about that stuff is pretty much the easiest way to make an adult look like an idealistic college kid. When I sit down to write these articles I make a good faith effort to say things that nobody else is saying.

But really, the reason I don’t talk about piracy much is because I know you aren’t worried about it. People don’t pirate dentistry. You perform an expert service, and that can’t be faked or copied. On the surface one expects your tech interests to lie mostly with how your day-to-day operations can be changed for the better, and secondarily, the subtleties of what those new technologies mean, (such as how less time needed to manage your appointment book means that your receptionist takes on new responsibilities … or fewer hours).

And all this build-up isn’t to justify some rant about how I feel the 21st-century-should-work and what-you-should-think-about-digital-goods. Far from it. It’s rather that I realized recently that my opinions about all that stuff behind the curtain are more applicable to you than I thought they were. Without engaging in college-kid diatribes, here are two bits of advice from the piracy battlefield:

Don’t Make it Hard to be a Customer

My favorite thing about Steve Jobs, (and understand that while I lionized him here, I am not without criticism of the man), was that he didn’t treat his customers like a bunch of selfish thieves. We live in a world where it seems every large company invests in some large infrastructure that is meant to make sure you can’t possibly steal their digital product ever. In an idealistic crucible, Apple is hardly innocent of this, but I do remember Steve Jobs explaining the decision to make iTunes with, “People don’t want to break the law.” He made iTunes in the face of Napster because he knew that the convenience of downloading music was more important than whether or not it was free. Again, iTunes itself isn’t above reproach, but Apple’s popularity over the past decade can be explained by its commitment to making the customer’s life easy. Microsoft makes you feel like a thief. Apple doesn’t.

A common mistake in the fight against piracy is that companies make it more difficult to be a legitimate customer. Don’t do this. Ask yourself, how easy is it to be one of your patients? How easy is it to find you online? How easy is it to make an appointment? Can appointments only be made over the phone? If so, you are losing new patients to every dental office that lets you make them online. Don’t believe me? Ask yourself how excited you are to be on the phone with a stranger right now. Ask yourself how a practice that only takes new appointments in person competes with a practice that takes new appointments over the phone.

Ask yourself how long customers spend in your office outside of actual treatment. How long do they spend filling out paperwork? How long does it take to schedule their next appointment? How long do they wait? Don’t make it hard to be a customer. A lesson from the piracy battlefield is that people will choose what is more convenient over what costs less money. Trust me. Be convenient.

Another way I like to put this is: Don’t put yourself between your product and your customer. If you are politically minded, you can equate this to [not putting a bureaucrat between you and your doctor] or [not putting a bureaucrat between scientists and their science], whichever line you are willing to listen to.

Don’t be a Jerk

Which is the politest I can put it. Another audience gets crasser words. An extension of the previous bit of advice is that being likeable is more important than being secure, (I think there was even a movie about this). People give money to things they like. If 20th century advertising taught us one thing, that’s it. The counterpoint to large companies making sure you can’t avoid spending money is small companies, (observation: you are a small company), ensuring that you want to spend money, which is arguably more important. Again, I’ll spare you the college-kid anecdotes, just take the advice to heart; and again consider Apple, who wins by convincing people to like them.

But what are you doing to be unlikeable? That’s hard to say. There is a fine line between being lenient and easy with customers, and indulging them beyond feasibility. A good many entrepreneurs make a point of pleasing the customer, but eventually you draw a line that protects your own interests, and where that line is is not obvious.

So consider this instead. Why should you care about being likeable?

I said you weren’t worried about piracy. I was right only in a literal sense. Where digital goods providers are “freaked out” by piracy, you are “freaked out” by something else.

You know what it is.

Social media.

Where the music industry is concerned that somebody, somewhere, might download an mp3 without paying for it, you are worried that somebody, somewhere, might bad mouth your business for no good reason. You can’t stop all of the pirates any more than you can stop all of the cranks complaining about your business. But you can minimize. You can discourage negative behavior by being positive. I don’t have concise advice for how to be likeable, but at the very least, don’t be a jerk.

The metaphor is Judo: using your enemy’s momentum to your advantage. You can put effort into leveraging social media directly, but more importantly, you just don’t want to get crushed by its weight. The highly connected opinion-of-everybody is a tremendous force you can’t stop. Don’t be a jerk. Don’t give it anything to complain about.

Anti-Buzz Future Tech General Hardware

Anti-Buzz: Moore’s Law

Andrew has been writing Anti Buzz for 4 years resulting in almost 200 articles. For the next several weeks we will revisit some of these just in case you missed it.


The Buzz Word: Moore’s Law

Sometimes I feel the anti-buzz tag line invites a rude attitude and I want to skip it. This is one of those times. Moore’s Law is a very popularly understood concept, and there is nothing wrong with this popularity. Discussing Moore’s Law in the context of buzzwords is merely meant to be fun and informative, not a lesson in how “wrong” the average person is. That said, Moore’s Law is a buzzword; remember that the defining characteristic of a buzzword isn’t strictly about accuracy, but about application that is broader than originally intended. The law takes many forms, but a minimalist, popular definition could be “Every 18 months, computing power doubles.” Moore himself was, strictly speaking, talking about the number of transistors that could be fit into the same amount of space. Despite a strong connection between the number of transistors and the performance of digital electronics, Moore was not directly addressing the falling price of hard drives or memory, or the mega-pixels in your camera, or the number of users on Facebook; yet every time something doubles, somebody wants to cite Moore’s Law.

Moore himself has joked that by the sound of it, you’d think he invented the exponential. This is an understandable reaction by the public. Transistors-per-square-inch is an obtuse thing to appreciate, and even when you draw the connection between that and processing power, people want to generalize this to a discussion about every statistic that could ever be used to describe their computer: RAM, screen resolution, storage space, battery life, you name it. Moore’s Law also puts a name to the problem the public has had with computing for the past two decades: it changes too fast. Moore’s Law gives you someone to blame. These popular generalizations are fine and not even that inaccurate, but what’s the real story behind Moore’s Law?

The True Moore’s Law: Gordon Moore, co-founder of Intel, was one of several who, in 1965, observed a trend in new integrated circuit technology. He stuck his neck out, and famously predicted that the number of transistors on one microchip would continue to double every year for at least the next ten years. In 1975, he reviewed the situation and revised his statement, claiming that they would continue to double every two years for the foreseeable future. Speaking strictly from the transistors-per-chip point of view, Moore’s two-year law still holds today. Interestingly, Moore never once claimed a turnover of 18 months, and never argued for anything other than the prowess of manufacturing technologies and how many transistors they could cram into one place. One notable thing to point out: Moore’s Law hasn’t held true for all these decades just because of some crazy computer magic, but because the integrated circuit industry, especially Intel, has used the law to guide its long-term strategies and has seen it as something of a mandate for how fast and hard it needs to improve its technology.
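As a quick sanity check on what two-year doubling actually implies, here is a small Python sketch. The 1971 starting point is the roughly 2,300 transistors of Intel’s first microprocessor; the projection itself is just the idealized doubling rule, not real chip data:

```python
def projected_transistors(start_count, start_year, year, doubling_years=2):
    """Project a transistor count forward assuming a fixed doubling period.

    This is only the idealized Moore's Law curve; real chips scatter
    around it rather than tracking it exactly.
    """
    doublings = (year - start_year) / doubling_years
    return start_count * 2 ** doublings

# Starting from ~2,300 transistors in 1971, two-year doubling predicts
# ten doublings by 1991 -- a factor of 1024, or about 2.4 million transistors.
print(projected_transistors(2300, 1971, 1991))
```

Twenty years of steady doubling turns thousands into millions; that compounding, not any single leap, is the whole story of the law.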

Power Doubling Every 18 Months: So where does the turnover of 18 months come from? This is the popularly understood time. An Intel colleague of Moore, David House, sought to give a more applied definition of Moore’s Law. He stated that the practical implications of Moore’s Law were that CPUs would double in speed every 18 months. This claim is much easier to digest if you are an outsider, but it is also more narrow; quite a bit more than processor speed matters to your machine’s performance. I think we all bandy about with “Moore’s Law” and not “House’s Law” because Moore’s Law impacts all facets of computing technology, which is what we need if we’re bringing mega-pixels and video RAM into the conversation. House was strictly speaking about processor speed and his law has become increasingly hard to judge in the past decade, with multi-core CPUs really muddying our ability to account for processing speed.

Beyond the CPU: So, Moore’s Law does impact the number of mega-pixels in your camera and the cost of your hard drive, but it is not a direct relationship. Not everything benefits from more transistors the same way a processor does. We vaguely wrap “number of transistors” up in the term “computing power”, use House’s number because it’s a bit sexier and, honestly, it’s close enough to correct that pedants like me shouldn’t really care. The important thing to understand is that most aspects of computing technology improve exponentially fast, which I suppose is yet another obtuse formulation. The other key concept is that processing power isn’t the only thing that matters, (the marketing campaigns of processor manufacturers have spent the last two decades convincing you otherwise), and “computing power” is a little more amorphous than just how many hertz your computer can flop over.

How much longer will Moore’s Law hold out? Nobody quite agrees, but there is good reason to suspect that it won’t last that much longer. Even with that knowledge, don’t get sucked into popular doomsday notions of what that means. When Moore’s Law finally fails it doesn’t mean that computers will stop getting better. In fact, it doesn’t even mean they will stop getting exponentially better – a common estimation is that improvements will only slow such that the number of transistors will double only every three years. That’s still pretty fast, and who knows how many decades that will last.
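To put that “still pretty fast” in perspective, here is one more short Python sketch, (the doubling periods and the 12-year window are arbitrary choices of mine), comparing how much growth each doubling period compounds to:

```python
def growth_factor(years, doubling_period):
    """How much something multiplies after `years` of steady doubling."""
    return 2 ** (years / doubling_period)

# Over the same 12-year span:
print(growth_factor(12, 1.5))  # 18-month doubling: 256x
print(growth_factor(12, 2.0))  # two-year doubling:  64x
print(growth_factor(12, 3.0))  # three-year doubling: 16x -- slower,
                               # but still exponential, not a dead stop
```

Even at the pessimistic three-year pace, computers sixteen times more capable in a dozen years is hardly a doomsday scenario.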

Anti-Buzz Tablet Computers

Anti-Buzz: The End of Large Software

Andrew has been writing Anti Buzz for 4 years resulting in almost 200 articles. For the next several weeks we will revisit some of these just in case you missed it.


I have spent many of my recent articles trying to make some sense of the smartphone/tablet boom, but what I haven’t done is really dig into what I think it means for you, a practicing dentist.

I think it is fair to categorize this move to mobile devices as a renaissance – that is to say a rebirth of the computing revolution.

You went paperless? Good! You have a website? Good! Computers in every office and treatment room? Good!

Now tear it all out, we’re starting over.

That’s what it means to you. Not very anti-buzz of me, but that’s what I think you are looking at. You might be able to make like the publishing industry and forestall the disruption – your industry has the means of sandbagging – but the day is coming where you will put a tablet in the hand of every employee and hold the expectation that your patients will have already checked in on their phone before entering your office, (perhaps forwarding you a note of their location when they do so you don’t jump the gun). You will visualize their circumstance with photos and x-rays and charts, passed to them on a tablet which they will flip through in the exam chair, paging the items with their fingers, (not unlike paging through the old records of yore, ironically). When finished they will finger in their next appointment on a calendar sync’d to respect both your time and theirs. The bill will be e-mailed, and they will pay on the spot, right there in the treatment room, either swiping their card down the side of the tablet, or just routing through PayPal with the press of a few buttons.

This will happen. When? That’s trickier.

Why It Will Happen

If you think the above narrative is dubious, I would guess it would be for one of two reasons.

You can’t run a dental practice on a tablet – Which is literally true, in the same way that you can’t run a dental practice on a keyboard. The keyboard alone certainly doesn’t have the storage capacity for your records and neither does a tablet, but they are both just interfaces. Your records are still on a big computer somewhere, (possibly not in your office). I am also not suggesting that your practice would become computer-free, just computer-light.

Patients won’t want to be so involved – which may well be true of some of them. You will still have a front desk, where your patients will be free to pretend that it’s still 1980 if they want. But there is a bigger picture you are missing if you think this level of customer interaction isn’t on the horizon. An increasingly large portion of commerce is being driven not just through the Internet, but through mobile devices specifically. The triumph of small software means there is an app for everything. People enter a business and compare prices, real-time, with other stores near them, (there is an app for it). We are already beyond the comfort threshold – that is, customers are already comfortable with this. We are looking at expectation. Customers will soon expect that they can enhance their experience through their mobile device at all times.

The difference between a practice that allows patients to take some control on the small screen and the practice that keeps all knowledge guarded on “professionals only” machines is the difference between an inviting, personable practice, and a stereotypically sterile one. You used to keep your appointment book behind the front desk because only your receptionists were trained to keep it straight. Then you kept it squirreled away on the treatment room computers because you couldn’t expect your patients to know how to use your software. If everything is handled on a light, intuitive touch interface, there really is no reason not to let the patient pencil themselves in. A patient who physically sets their own appointment is going to feel more invested in returning for treatment. And while I understand that many offices are already rife with tools to present information to patients, handing a patient the means to explore their own records and information offers a touch of humanity, makes them feel like they have more control, and will ultimately make them feel more comfortable with you and the services you offer. You want these changes to happen.

Apart from infrastructure, the real change is that practice management software is currently designed for use by you and your staff, and it will need to change to be used by you, your staff and your patients.

Why It Won’t Happen

As I said, as an industry, you are capable of sandbagging. The sort of mobile revolution I’m talking about is going on right now in the retail industry, but there we are mostly talking about large national chains. It makes sense that these corporations would either hire engineers to build a solution, or contract with a large development company to build one for them. You are a small business, and aren’t really in the game of revolutionizing practice management tools. You indirectly contract engineers by purchasing practice management software, but ultimately you are just a customer, not a true partner. It is possible that the changes I’m discussing will come naturally. Perhaps the types of packages I’m describing are in development right now! But much as the stubborn among you were able to hold out against “computerizing” your office, you can again hold out against “tabletizing” it too. It would be a mistake, but it would not be an immediately dire one. It’s really a matter of how forward thinking you are as a group, because the types of solutions you are offered will be reflective of that.

If the developers of practice management software do not move away from the increasingly archaic large software suite, they will be prone to disruption by the first company to take the risk. Right now the industry enjoys being composed almost entirely of small businesses. There are no ubiquitous dental chains. But if, as a group, you stall out on modernizing again, you could risk being supplanted by anyone daring enough to try putting the sleek “mobile” dental experience in offices around the country.

Anti-Buzz Future Tech

Anti-Buzz: Modeling the World

Andrew has been writing Anti Buzz for 4 years resulting in almost 200 articles. For the next several weeks we will revisit some of these just in case you missed it.

The Buzz: You can’t beat human intuition.

The Other-Buzz: You can’t beat the hard truth of math.

“The plural of anecdote is not data” – an aphorism of somewhat uncertain origin. I will still begin anecdotally: when I was much younger, there was a joke about “them”. They told you eggs were bad for breakfast, (cholesterol), then they told you they were good, (protein), then they shocked you by telling you it was ideal to be 20 pounds overweight, (they later told you this was not actually true). They told you what the world population would be in 2050, the precise moment we’d run out of room for our carbon footprint, (or whatever), the probability that we’ll die of a heart attack, the probability that we’ll die in a car accident, and the log-likelihood of the Chicago Cubs winning a game on a windy Tuesday against a left-handed pitcher.

“You know, they say you shouldn’t swim after eating.” “You know, they say Shakespeare didn’t write all that stuff after all.” It seemed we were always talking about them. “Who are ‘they’, anyway?” That was the joke. There were a few decades when we were all just connected enough to be inundated with the statistical analysis of the world, but not connected enough that we could be guarded against it by snarky basement bloggers. “They” aren’t quite so loud anymore because we’re all much louder. Or I’m just older. Personal anecdote after all. In my day we had to go uphill both ways through nutritional studies and actuary tables.

I’m becoming one of them. Yes, one of “them.” The they who tell you, with absolute uncertainty, what the world is really like and how it works.

Anti-Buzz Internet

Anti-Buzz: Anti-Internetism

Andrew has been writing Anti Buzz for 4 years resulting in almost 200 articles. For the next several weeks we will revisit some of these just in case you missed it.


The Anti-Buzz: Anti-Internetism

—About ten years ago I was studying film in college and we were lectured on how ideology can be encoded into art. “Ideology” is itself a tricky thing to define and trying to do so here could lead us too far astray, but one salient point was that ideologies seek to become invisible – that is, you are typically unaware of them. Successful ideologies feel like scientific law, their veracity so clear that you no longer want to call them ideologies, “ideology” being some word used to describe the culture and morals of those foreign to you, or those who hold positions you don’t like.

The naturalized nature of ideology is what allows fiction to make heroes and villains without having to explicitly define them as such. When a book or film is “trying to say something” and says it a bit too hard, we criticize the work for being pedantic, or preachy, or sappy, but what we are really reacting against is the uncomfortable visibility of the work’s ideology. When we have trouble relating to something foreign, it is in part because we are not fluent in the core ideologies that the work is built on top of.

The real key, however, is that you can go a long way in analyzing a piece of art not by looking at what it says, but by looking at what it assumes.

—About three years ago we all saw the first wave of bankrupt newspapers. This was on the heels of the 2008 financial collapse. For legal reasons, I think it best I just provide links rather than reproduce the images here, but consider this political cartoon from April 2009. Also, in accordance with my non-confrontationalist policy, I will tiptoe around the details of the financial collapse, but the underlying belief of this cartoon is that newspapers are a “scientific law” constant of our society, that they represent the public voice and the public discourse, and that their failure is the harbinger of a new voiceless American public. The argument of the cartoon is that the flailing newspaper industry is another piece of the big “injustice” pie. Considering that I’m basically just some guy and I get to blab at you each and every week, I’m not too sold on the idea that bankrupt newspapers are impeding my “voice”. I think we all understand that newspapers in general are not a sustainable business, and that a financial crisis only hastened their demise, but they were already doomed anyway.

Here’s another cartoon from the same era. This one is much crasser. Where the first cartoon is just a case of misguided mourning, this one seems to imply that my generation, by not having attended the right college, is a class of thoughtless dunderpates. But that’s just what it says, and what it says is pretty clear: Internet culture is neolithic. What it assumes is more telling: that shuttering public discourse behind the filter of “publishing standards” made for better public discourse, and that a new society that enjoys freedom of speech in both word and deed is going to be besieged by blithering idiots. It’s ivory tower cultural elitism, the odd 20th century belief that anything available to the masses must be lowbrow or dangerous.

—About a year ago I was returning home from Europe, and the flight was sufficiently long that avoiding the in-flight entertainment was not an option. As sort of a guilty pleasure, my fiance and I watched several episodes of a new police procedural drama. This was as close to “normal television” as I had gotten in a long while. It was clear the show was aimed at people 20 years older than me, but what really stuck out were the underlying beliefs about the Internet. Bloggers are unironically ridiculed at press conferences. Police Commissioners agonize over social networks spreading public panic and otherwise irresponsibly handling delicate information. Villains record their crime sprees and post them on YouTube. Other villains learn how to make bombs from the Internet. The Internet is a big problem for law and order. This was one of my first inklings that traditional media was being overrun with anti-Internetism.

I’ve seen it elsewhere. Talk show host Craig Ferguson likes to say that Wikipedia is “wrong”. Aaron Sorkin’s drama The Newsroom has a token Internet journalist who is routinely ridiculed for taking any non-traditional media seriously. Most recently, I watched an episode of Saturday Night Live where a Weekend Update segment spent too many minutes on a gag about rude social media users being taken just as seriously as traditional sources, (“too many minutes on a gag” also being a good summary of Saturday Night Live). I’m sure I could fill up a book if only I had the stomach to watch more traditional television. I already encounter the anti-Internet ideology far too often considering how little television I watch.

I’m ranting about it here largely because I feel the sentiment is completely naturalized – that is, it is “invisible”. I think one could watch a lot of television and not “get” that nearly every show takes a stab at Internet culture. I’m hoping that by pointing it out, it might become more visible.

—This week, in the United States, we celebrate Thanksgiving. I should tell you here that, among other things, I am thankful that we are moving closer to a true freedom of speech, where everyone has not only the right but also the means to voice their opinion. I am thankful that the cynics are wrong, and that a typical user can patrol the cacophony of content and make responsible choices. I’m thankful that the “Lowest Common Denominator” is just an ugly myth perpetuated by a generation of overprivileged media producers. In short, I am thankful for the Internet.


Anti-Buzz: Corollaries

Andrew has been writing Anti Buzz for 4 years resulting in almost 200 articles. For the next several weeks we will revisit some of these just in case you missed it.


When was the last time somebody called your office and asked for directions? When was the last time a patient showed up late, (or not at all), because they got lost? Not everyone has a smart phone or GPS and, (somehow), not everyone can look up directions from home, but by and large, these scenarios seem like an absurdity. Not that it proves anything, but when I Google “asking for directions” I get a bunch of pages trying to teach you English; I count this as evidence that asking for directions is a fading institution.

Getting where you want to go has a lot less friction than it used to, and that benefits everyone. It used to be that if you were running errands and you wanted to tack on an extra stop at a new place you had never been to, this was potentially a huge hassle. How do you get to this new place? You could hope somebody else knew the directions, or if you had the address you could look it up on a map, (a real map – the kind that doesn’t know where you are or plot your course for you). And if you had neither of those things? You could call the new place and ask. But how do you call them? You hope your first errand stop would let you use the phone. And you’d hope they had a phone book too. Or you’d find a pay phone. And even then you might still end up having to orient yourself with a map. Or you’d just give up and go home.

The people who put effort in improving telecommunications weren’t trying to make it easier for you to run errands, but they did. And every business that might have lost a customer to the uncertainties of the above process no longer does. The improvements in communications technology were also not intended to help drive customers into stores, but they did. The true impact of technology is often corollary to the original goal.

Ideally, your patients only visit you twice per year, so you benefit from this “never lost” effect less than others, but even so you are spending less energy offering directions and phone lines, (and phone books), to lost patients than you used to.

The lesson is that the benefits of technology are not always straightforward. This is why it can be problematic to look at something new and demand to know how it benefits you. Social media, as usual, is the go-to example here; trying to anticipate its impact is just as tricky as anticipating anything else in the past 30 years would have been. The last three decades have been full of the unintended benefits of frictionless computation and communication. Even the visionaries didn’t completely see what was coming.

There are elements of modern media that do their best to punditize technology, to throw around news and tech predictions with some amount of swagger. It is the same swagger that fills any pre-game show. It is the same swagger that elements of the media had back when “computers” and “fad” could be uttered in the same sentence. It is the swagger of false expertise, of understanding the status quo and trying to extrapolate from it. For my very small part, I am part of this media too. I do my best to talk about popular technology in an intelligent way, but when it comes down to it, I’m merely guessing, just like the rest of us.

There is the very old saying that necessity is the mother of invention. If you were careless, you might think the last 30 years run counter to this. Who “needed” YouTube anyway, right? But consider the add-an-extra-errand debacle described earlier. Nobody in that situation assumed there was a solution to the problem. More accurately, nobody in that situation assumed there was a problem at all. It was just the way things were. But as soon as that process began to lose friction, people latched on. There was a need driving the innovation, it just wasn’t obvious. And that’s the real lesson and the real source of innovation. Do not look at something new, tap your foot impatiently, and ask what it does for you. Instead, find more problems. Look at your processes and policies and ask where they can be better. What is your least favorite part of your week? Can you make that better? Solve the problems you didn’t know you had and, perhaps corollary to solving them, you will discover the benefits of a new technology.

Anti-Buzz Security

Anti-Buzz: Patient Records Revisited


Andrew has been writing Anti Buzz for 4 years resulting in almost 200 articles. For the next several weeks we will revisit some of these just in case you missed it.

BIG NEWS, We may have to wait a bit longer for some new Anti Buzz articles as Andrew just became a new father. Congratulations. That means that I just became a Grandfather. 🙂

Anti Buzz: Privacy in the Internet Age:

The privacy conversation has more legs than I anticipated, having already found myself exposed to many stories and opinions on the subject. In early 2014, “predicting” that privacy is going to be a big deal in the near future turned out to be a safer and simpler guess than I realized. But this is good for me, and for you, because I have a lot to say on the subject, and dentists have more at stake in the conversation than most people. Expect more privacy-centric conversations in the future here. This week: a practical map of what the concerns are for your practice.

First, a lot of what I will say today isn’t really new to this blog. My father has discussed electronic dental records many times before, and I’ve chipped in with my own perspective. My father was keen enough to the ambiguity of “ownership” before it was popular discussion. And most recently, of course, is HIPAA and what it could mean for you. In short, the records you keep on your patients are a hot commodity.

As somebody who increasingly fancies himself a scientist, I am very sympathetic to the arguments put forth in this TED talk – briefly: We stifle innovation by limiting access to patient records, yet this flies in the face of conventional wisdom and ethics. It is highly unlikely that your patient records are the key to curing cancer, but the truth is that we don’t know what innovations we are missing by keeping things locked up. This much should be easy enough to convince people of by now as the conventional wisdom has shifted far away from technophobia’s famous “Everything that can be invented has been invented” attitude.

The question is, of course, whether the benefits outweigh the invasion of privacy, but I don’t actually presume to make up your mind about that. I do presume to tell you that you are going to need to take a position on the matter before too long. I am perhaps getting ahead of myself here. Let’s walk through why your patient records are important, and to whom.


Anti-Buzz: Driver’s Arithmetic

This article from early 2013 on driverless cars is even more relevant today.


The Buzz: A computer can’t possibly account for all the variables in driving a car!
The Anti-buzz: Driving is only arithmetic.

Early in the year while we are still looking forward, I’d like to talk about another emerging technology – automated cars, (read up if you are unfamiliar with this phenomenon). I’m not going to pretend to make a claim about the development and adoption of this technology – 2013 certainly won’t be the year of the driverless car, and I don’t know which one will be – but the successes of 2012 give me an opportunity to revisit why computers are good for some things and not for others – and perhaps contrary to your intuition, computers would make much better drivers than we ever could. I am always eager to remind you that machines are not in fact very intelligent and people, despite what pop-cynicism might have told you, are really smart. The twist-ending to the story is that driving does not actually require much intelligence.

Driverless cars are one of those “magic” technologies that are either in demand or on the immediate horizon, yet I think its prospects are better than many others. As a foil to driverless cars, let us consider a competing “magic” technology: Voice recognition. From a lay perspective, both of these technologies might seem similarly complicated. In fact, voice recognition might seem simpler than driving a vehicle; after all, kids learn to communicate and talk long before we trust them with a car. The technologies face quite a different set of challenges, not all of them technical, but understanding these differences can be illuminating. I think it best to proceed by answering a string of hypothetical questions.

If voice recognition is so hard, why are we already using it? The answer is that there is a lack of risk. The biggest impediment to driverless cars, really, is that the stakes are so high. If a robot car fails, people die. When a customer service phone tree fails to understand what you are saying, it is a mild annoyance. What voice recognition we do use is unfettered by safety concerns. If you want some machine learning jargon, a problem domain includes a loss function – that is, a way to score success and failure in a way that fits the real world. The loss functions for these two problems are very different. Voice recognition that works 98% of the time is very impressive. A car that makes the right decision only 98% of the time is life-threateningly terrible.
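The loss-function idea can be sketched with a toy example. The cost numbers below are entirely hypothetical, chosen only to show that the same 2% error rate scores as a mild annoyance under one loss function and as a catastrophe under another:

```python
def voice_loss(errors: int) -> float:
    """Each misheard phrase costs one retry: a mild, linear annoyance."""
    return errors * 1.0  # hypothetical: one unit of annoyance per failure

def driving_loss(errors: int) -> float:
    """Each wrong driving decision risks a collision: an enormous cost."""
    return errors * 1_000_000.0  # hypothetical cost of a potential crash

decisions = 10_000
errors = int(decisions * 0.02)  # a system that is right 98% of the time

print(voice_loss(errors))    # 200.0 -- tolerable
print(driving_loss(errors))  # 200000000.0 -- catastrophic
```

The accuracy is identical in both cases; only the real-world scoring of failure differs, which is exactly why 98% is impressive for one problem and terrifying for the other.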

Why is driving an easier problem? More pointedly, why did I say driving does not require much intelligence? People don’t want to hear the latter because driving is something they do, and it is not easy. However, people aren’t stupid because they can’t compute a square-root faster than a computer. People aren’t stupid because they can’t store a novel in their head and reproduce it word for word, (but your e-reader can).

The thing about driving is that it is a very well-defined process. For basic operations, there are clear laws, and a very well codified set of symbols that indicate all the special rules governing any location, (stop signs, traffic lights, white dashed lines, yellow dashed lines, solid lines, lane markers, etc.). For the parts of driving that require on-the-fly thinking, it is still mostly a matter of observing objects, calculating distances, “looking a few moves ahead” and avoiding collision. Driving is, in many ways, just a tedious math problem. Doing it safely requires discipline and focus and mental endurance and the ability to not let your mind wander.
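The “tedious math problem” framing can be made concrete with a small kinematics sketch. All the numbers here, (reaction time, braking deceleration, speeds), are illustrative assumptions, not real vehicle parameters:

```python
# Safe following distance is plain arithmetic: distance covered during the
# reaction delay, plus the braking distance v^2 / (2a) from basic kinematics.

def stopping_distance(speed_mps: float, reaction_s: float, decel_mps2: float) -> float:
    """Total distance needed to come to a stop from the current speed."""
    return speed_mps * reaction_s + speed_mps ** 2 / (2 * decel_mps2)

def must_brake(gap_m: float, speed_mps: float,
               reaction_s: float = 0.1, decel_mps2: float = 7.0) -> bool:
    """Brake if the car ahead is closer than our computed stopping distance."""
    return gap_m < stopping_distance(speed_mps, reaction_s, decel_mps2)

# At 30 m/s (~108 km/h), with an assumed 0.1 s reaction delay and 7 m/s^2
# of braking, a computer needs 3 + 900/14, or about 67.3 m, to stop.
print(round(stopping_distance(30.0, 0.1, 7.0), 1))  # 67.3
print(must_brake(gap_m=50.0, speed_mps=30.0))       # True
```

Nothing here requires intuition or judgment; it is the relentless, error-free repetition of this arithmetic, many times per second, that computers are good at and humans are not.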

If, instead of putting yourself behind a steering wheel, you put yourself behind a notebook and had to scribble out the answers to arithmetic problems as fast as you could, you wouldn’t think twice about letting a computer take over. In contrast, voice recognition suffers from all sorts of inconsistent noise. Instead of the regularity of road signs and the simplicity of measuring how far away the next car is, voice recognition has to deal with the fact that people mumble, that people have accents, that they speak different languages, that they sound different in different emotional states, and that each person has a unique voice. This says nothing of the task of actually understanding language. Language is a more idiosyncratic, intuitive, human thing. It’s not so easily codified.

Would a driverless car really be safer? I think so. The reason why really illuminates the difference between what computers are good at and what humans are good at. A robot car would not only know that there was a car ahead of it, but exactly how far away it was, how fast it was going, and exactly how long it would take to brake to a stop. A robot car would never be bored, have to sneeze, or otherwise take its eyes off the road. A robot car is looking ahead of itself and behind itself at the same time. If the robot car is common enough, it’s talking to all the other nearby cars. Now it no longer has to guess about the car ahead of it slamming its brakes – it would be told directly. The computer would be privy to information a human never would be, and it would enjoy the ability to “look” in all directions at once and never be distracted. It really is just on-the-fly number crunching, and that is not something humans are good at.

If the problem is so simple, why hasn’t it gained speed until now? How long until it is commonplace? All of the above shouldn’t trivialize how amazing the current technology is. First, a lot of what makes driverless cars possible is the ability to give a computer the same sensory facilities as a human. This is no small task, nor is it particularly affordable right now. I downplay the “intelligence” required to drive, but the AI technology needed to successfully “see” things like road signs and painted lines is definitely not trivial. The second question is much harder. I’m very optimistic about the viability of the technology, but less so about its ability to become a consumer product. The day will come, but there are two huge hurdles. One is public opinion – it seems very mixed right now. If too many people refuse to trust the technology, it will be a struggle for it to succeed. The other is legal. Even just the question of liability is enough to delay the technology for perhaps decades – when a robot car kills someone, who is at fault? Driverless cars have been legalized in three states already, but the current law holds the “driver” responsible – the car can’t be unmanned and it is assumed the driver is alert and ready to take control whenever it wants. The dream is to be able to surf the Internet or read a book or do whatever you wish while your car chauffeurs you around.

That last reality might yet be far off, but it wouldn’t be for a machine’s stupidity, I can tell you that.


Anti-Buzz Internet

Anti-Buzz: The Bicycle Super Highway


The Buzz: The new always obliterates the old.
The Anti-Buzz: The old builds the house that the new enjoys.

Cars changed everything. Nowadays we indulge in conversations about fuel efficiency, alternative energies, or just going green and biking to work; all fine conversations to have, but maybe it is too easy for any car cynic to forget just how revolutionary cars were. One of several modern transportation inventions, the car stands out over trains and airplanes because it is small, personal and democratic. I can ride a plane from Los Angeles to Australia, or take a train from Manhattan to Vermont, but I can drive my car to the grocery store, the theater, my friend’s house, the woods or anywhere I want. With a car a person’s range increases tremendously, and the reach of businesses explodes. We say the Internet is the most important invention since the printing press, but don’t forget that early Internet metaphors referred to a Super Highway. If we want to say the 20th Century ended with the fall of the Soviet Union, (as some historians would have you do), then we can safely say the advent of automobiles, roads and highways was the Internet of the 20th Century.

Now, I’m not here to laud car usage for 1,000 words, nor am I here to ruffle the feathers of any bicycle enthusiasts, (of which I am one). No, the next observation is to consider the state of car-alternatives. For brevity, I will focus on the bicycle. Now, as a technology, bicycles owe nothing to cars. They predate cars, and borrow no aspects of the internal combustion engine, unless you wish to argue for vague notions of “modern engineering.” They even enjoy the same destination-versatility, (like cars, they take you “wherever you want to go”). However, insofar as the bicycle performs as an effective mode of transport, it owes almost everything to the car: the paved infrastructure, the greater accessibility of businesses, the availability of bicycle parts and repair services – all of these are pushed by the automobile. The bicycle is better off than it would have been had the automobile never taken off.
Perhaps the analogy to cars and bicycles doesn’t resonate with you, but more than likely the idea I floated last week about social networks supplanting your intra-office communication does. The primary contribution of revolutionary technologies is that they drive new infrastructure, and if you’re the sort who still scoffs at social media, you are ignoring that its immense popularity is shaping the infrastructure of the future. It’s reasonable that your present concern with social media is how you can leverage it as a business tool, but eventually the popularity of social media will be less important than the communication motifs it has helped create. It just might be that trying to connect with millions of Facebook users is a waste of your time, but it is not absurd to think that we might repurpose the “what” of social media: public visibility of activity, public “wall” style communication, ease of embedding media, topic tracking, private communications both live, (chatting), and asynchronous, (messages/emails) – these are all sensible business technologies.

If you are a tech trend geek, then you would do well to look for these infrastructure moments – and be wary of trend claims that ignore this reality. My favorite among the latter is “The PC is dead”, bandied about in the wake of smart phones and tablets. These micro-pseudo-computers are certainly “the future” but touting them as PC killers is not unlike touting the bicycle as the automobile killer. Tablets are more affordable, better in some ways, and also less powerful and more narrowly designed. Smartphones are great, but they are the bicycles of the Internet. If everybody switched to bicycles tomorrow, we would still be wise to run this nation’s freight on the back of an internal combustion engine. Even as we see “mobile” versions of websites, (mere bike lanes), the PC still drives the infrastructure, and will continue to do so for some time. Thinking we are done with real computers because we have tablets now is sort of like saying we don’t need scientists any more because we have Popular Science Magazine.

I digress somewhat. The importance of something like a tablet is that we have more Internet users because we were able to give them a less obtuse computer. Computers are “for nerds” and somewhat correctly they will continue to be so – regular people can enjoy a smaller, pop-computer that specializes in the sort of things a regular person wants to do, (And it is too bad the car/bike analogy breaks down here). Tablets are great, but they are only as good as they are because they can use the infrastructure built by the computer.

Anti-Buzz Future Tech Hardware

Anti-Buzz: Stylus Pens

This one from last year is interesting considering that one of the major selling points of the new Surface Pro 3 is the stylus.


The Buzz: A tech prediction! The Anti-Buzz is going to make a tech prediction!
The Anti-Buzz: I predict that stylus pens are going to get really popular.
The Buzz:

Groan, I hear you say, that’s a boring prediction. I admit that it might only seem bold inasmuch as it is very unexciting. Stylus pens? you ask, Why would you care about stylus pens? And they already are popular among professionals trying to keep up appearances while they play Angry Birds at lunch. And they’ve already been popular among smart phone users, particularly Blackberries. So, yes, my prediction needs to pack a little more punch.

How about this: My assertion is that stylus pens will be the focal point of the next big interface improvements. In a few years you won’t be able to work your iPad without one. And you won’t want to.

So I really am sticking my neck out here; things might not break this way.

We are at critical mass on touch interfaces, so much so that even Microsoft is trying to push them for desktops. If I were Grand Czar of the computing industry and, several years ago, had been tasked with trying to get more computers in the hands of “technophobic” casual users, I’d still be shoveling netbooks into the fire and hoping.

I wouldn’t have thought of going sideways and pushing tablets. Ever. But somebody, somewhere, saw smartphones, saw the lukewarm reception to netbooks, did one of those cartoonish double takes where they go back and forth. Smartphones. Netbooks. Smartphones. Netbooks. Finally – eureka – they understood the truth and their finger shot in the air: “People really hate keyboards!” And I think “not having a keyboard” really is the magic behind the tablet boom. Losing the keyboard wasn’t some compromise consumers made so they could have tinier computers. Losing the keyboard was the selling point. I think for a lot of people, keyboards create stress. They are just a lot of buttons and space.

Indeed, we’re experiencing a new high in pop-computing. The kind of person who was comfortable with computers – would use them at work, might use them at home, check their email, read a little news – was still not among the rest of us geeks who would put ourselves in front of a screen for every waking hour if we could. Now everybody’s in front of a screen. And most of them are touching it.

But something has to break the other way. These new touchscreen interfaces are refreshing, and made using a computer feel like a day under the cabana, but the vacation has to end eventually. The further tablets penetrate the consumer computing market, the more they are going to be expected to help us get our work done.

Enter the stylus pen. Right now, the stylus has a narrow use, its acknowledged function being that it offers more precision for a touch interface. This application is obvious on smartphones, which often sport them to match their smaller screens. But as applications become more complicated, (and serious), there has been an increasing need to move away from the touchscreen’s usual array of fat, comforting buttons. This is the easiest selling point: stylus pens allow for more complex displays and more precise interactions.

But this isn’t all about precision. We also lost the mouse in this shuffle. Easy, cabana-worthy computing doesn’t miss the mouse at all, as it is pretty much replaced by the touchscreen, but there are a few quiet advantages a mouse still has, the biggest one being that it has buttons, (though admittedly one of those buttons simply approximates “touch here”). The stylus pen will replace your finger because we can install buttons and other features into a stylus pen and we can’t do that with your finger. (It’s already happening with Samsung’s Galaxy Note series). Feature-rich stylus pens are in their infancy and I won’t wager on what the final iteration is going to look like, but I can speculate here. In addition to straightforward buttons they might react to you squeezing them, or maybe the back end will have a button to click, (like you do to use a real pen), that will put them in a different mode. We might even see the addition of pressure-sensitive surfaces, adding even more variety to our ability to interact with a computer through a touchscreen. Regardless, somebody better at this than me is going to figure out the best way to use the real estate of a stylus pen, and it’s going to make the current run of tablet computers look archaic.

Plus, we can finally stop smearing the grease from our chicken wings all over our screens. That will be nice too.

Anti-Buzz Social Media

Anti-Buzz: Broken Windows


No, not that Windows; these. The broken windows theory of crime is not something I would profess to be an expert on, but the idea is applicable in matters of professionalism and technology. My layman’s summary is that most crime is opportunistic, and will be more often found in situations where there is a perception of low enforcement. Broken windows beget more broken windows; their presence signals that “nobody cares” and so the next would-be vandal is more likely to take the risk. Vandalism begets more vandalism, graffiti more graffiti. A disorderly neighborhood feels less safe, making residents more fearful, and as Yoda has assured us, fear leads to hate, hate to suffering, and et cetera until we cross over into the Dark Side. The argument is that keeping things clean and orderly goes a long way to preventing crime.

Even in lighter matters, we are inclined toward a similar ‘community standards policing.’ I suppose it doesnt really matter how u spell it or if you dont remember some punctuation or maybe youre sentence runs two long. Just know that the signal it sends is that you think vandalism is okay. I myself am guilty of scoffing at etiquette and good appearances. Even now I sometimes wish that people could just focus on ‘what really matters’ in a meritocratic sense – who cares how someone dresses if they get the job done? Yes, it may be that someone who is over-concerned with superficial matters of dress and niceties is not a very useful person, but all those niceties do have a use in the end – they reassure everyone around you that you aren’t the sort of person who slashes tires and smashes mailboxes.

Of course, I doubt I need to lecture dentists on the value of professionalism. What I do need to point out is how ignorance of tech matters can leave you with broken windows you might not be aware of. Using technology to reach new customers is a double-edged sword, as it also gives you new opportunities to look unprofessional.

The most egregious example is, of course, overreacting to online reviews. More down to earth advice is to not let any web presence you maintain yourself, (as in your website or Facebook profile – not your Yelp page), fall into disrepair. ‘Disrepair’ can mean any number of things, but mostly just keep the information you have current. The only thing worse than a business with no web page is one that lists the wrong business hours – it is in effect a broken promise. If you told me your listed hours were correct, I would ask whether you had bothered to post your holiday hours this past July 4. Do you take a week off in December? Is that reflected on your website? It’s a small thing, but as soon as the customer is affronted by some minor vandalism, they start to feel less safe. Your cheery disposition seems less sincere. Your next small mistake is magnified. It is not absurd to suggest you might lose a patient all for the simple fact that you forgot to say you were closed for Labor Day.

On the subject of websites: minimal is fine, trashy is not. I have dismissed fears of ‘security’ in the past, saying that it was up to the user to be savvy enough to avoid the metaphorical dark alleys. Another way to look at it is to avoid broken windows, both in your website design and your general Internet exploration. To come full circle, poor grammar is the hallmark of scam emails. As for websites: Pop-ups? Unwelcome videos? Bad colors? Only rudimentary HTML? You know it when you see it. There are would-be reputable websites that break the rules and come across as looking like a dangerous side street. Don’t surf there, and don’t let your own page become one of those neighborhoods.

More front and center are the computers in your office. I would not load up on Macs or otherwise dictate your practice management software simply for how sleek it looks, but you do have to be aware that what is on a computer screen does say something about its user, and your patients will look at your computers from time to time. I wouldn’t agonize over the decision, but desktop wallpapers should match some standard. Desktops themselves should be free of clutter. No random files lying around because somebody was in a rush and wanted something to be convenient. I wouldn’t think “what does my desktop say about me?” but rather just be wary that the monitors in your office are yet another chance for you to reveal some broken windows.


Anti-Buzz: Wish Fulfillment


So I like to harp about statistics and science and how they get portrayed in common media. This stuff matters to you, of course, because the care you provide is, ultimately, justified by studies and findings and conclusions drawn from medical science and statistics. I came upon this joke image a few days ago.


For some context, the Toronto Maple Leafs are a team in the NHL that has, for about 15 months now, consistently done better than stats geeks think they deserve to. That projected season record, if it were to come to fruition, would be the best regular season in the NHL by a mile. Of course, it’s all a joke, because any reasonable person understands that whether or not a team comes from a “C-name” city has nothing to do with the outcome of a hockey game. Besides, The Leafs beat the Calgary Flames 4-2 on Wednesday, so this silly C-curse is clearly hogwash.

The elephant in the corner is that, mathematically speaking, there is nothing incorrect or inappropriate about the above analysis. We “know” that letters in team names are completely disconnected from the cause-effect chains that govern hockey. And yet, there’s no rule that says what factors we can and cannot consider when performing an analysis – if there were, it would be a bane on innovation – and so we are left with the fact that, well and truly, the Maximum Likelihood Estimate for the probability that the Toronto Maple Leafs will defeat a team from a city that has a name that doesn’t begin with the letter ‘C’ is 100%.


Even correcting with a Bayesian prior, (in this case, assume you spot each category a free win and a free loss), we still have the Leafs at roughly an 89% chance of beating any non-C team, and only a 17% chance of beating any C team. The projected season record in this case would be about 65 wins, still an all-time best for the NHL. Winning and playing against non-C teams is perfectly positively correlated, with a correlation coefficient of 1. The list of stupid statistics you can generate on this goes on, and none of them are invalid outside of the fact that our intuition rightly tells us they are invalid.
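The arithmetic here fits in a few lines. The win-loss counts below are assumptions chosen to reproduce the article's rough figures, not the Leafs' actual record – say, 7-0 against non-C teams and 0-4 against C teams:

```python
def mle(wins: int, losses: int) -> float:
    """Maximum likelihood estimate: the raw win fraction."""
    return wins / (wins + losses)

def smoothed(wins: int, losses: int) -> float:
    """Spot each category a free win and a free loss before estimating,
    i.e. Laplace smoothing with a uniform prior."""
    return (wins + 1) / (wins + losses + 2)

# Assumed counts: 7-0 vs. non-C teams, 0-4 vs. C teams.
print(mle(7, 0))                  # 1.0  -- the "100% chance" vs. non-C teams
print(round(smoothed(7, 0), 2))   # 0.89 -- still ~89% after the correction
print(round(smoothed(0, 4), 2))   # 0.17 -- ~17% vs. C teams
```

The point survives the correction: nothing in the math objects to the C/non-C split; only our intuition about hockey does.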

The criterion of C vs. non-C teams is patent nonsense, but this should tell you something about the statistical models we use to make important decisions: they are still very much a product of human biases. I’m giving you a silly example to play with, but more legitimate analyses are still plagued by the problem that we choose what to examine.

To bring this back around to technology, the new ‘big data’ phenomenon is in part about just keeping all the information we can and letting the computer figure out what is and isn’t important. This is not as easy as it sounds, and there’s a bit of a chicken-and-egg problem in that generalized, ‘unbiased’ algorithms still need certain parameters to be set, and those parameters are set by a human.

To bring this in a more critical direction, my skin usually crawls when arrogant literati like Jean Baudrillard say stuff like: “Like dreams, statistics are a form of wish fulfillment.” (First of all, nobody would tolerate a mathematician being so curtly dismissive of an entire intellectual endeavor – they would be quickly branded as “emotionless” or “cold” or whatever else people say when they don’t want to listen to mathematicians and scientists.)

Yet, sometimes, like in the case of the Toronto Maple Leafs, the statistics really are a form of wish fulfillment.

Anti-Buzz Software

Anti-Buzz: Operating Systems



The Buzz: Operating Systems are a brand, inseparable from the machine they are sold with.

The Anti-Buzz: Operating Systems are just mediators, and more interchangeable than you might think.

Operating systems: How do they work? What is their relationship with your hardware?

Where is the operating system?

When most people think of operating systems, they probably think of only two options: Mac and Windows. This is fine, but it does ignore the fact that your smart phone, (even if it is a “Windows” phone), has an operating system, as does your tablet and your Kindle. Of course, pointing this out only exacerbates the first common misconception I’d like to destroy, which is that the OS has anything to do with your hardware. If you observe that iOS is only on iPhones, Mac OS is only on Macs, and Windows is only on “PCs”, you’d be forgiven for thinking that the OS was in fact part of the hardware itself, that it was encoded, at least partially, in the actual transistors of your machine.

No, all operating systems are 100% pure software, and they live entirely on your hard disk. They are just programs, not unlike all that other software you use. They are, however, incredibly reliant on the hardware they work with, so in some specialized cases, such as your smart phone, both the hardware and the operating system were designed with each other in mind. However, computing hardware and software have become increasingly modular in design, so it is likely that all of these components can be made interchangeable, with some work, (that is, iOS on your PC, Windows 7 on your Kindle, whatever).

In fact we have seen this already to some extent, as it was widely publicized that a little elbow grease could get Chrome OS installed on a Nook Color, effectively turning it into a tablet at a fraction of the cost. In a brilliant stroke of 21st century thinking, Barnes & Noble responded by saying “Cool, we’ll just sell them this way,” instead of the usual, paranoid protect-our-product-from-our-customers that we are used to seeing from the entertainment industry, (I wouldn’t be surprised if, being a bookseller-that-survived-the-great-reckoning, B&N has a better set of priorities than many software companies).

Anyway, less surprising, (but maybe you did not know?), is that you can install Windows on your MacBook, and Linux too if you are so inclined. Some of you might be a little surprised to think of a machine with multiple operating systems, (you just pick which one you want to use when you boot up), but that should drive home the point that operating systems are just software, and all your things, (photos, music, documents), are just data. In my case I listen to the exact same mp3s when I am using Windows as I do when I am using Linux – it’s just data, and the operating system is just a portal to that data.

How does the operating system use the hardware?

Again, the OS is extremely reliant on hardware, most especially the CPU. A compiled OS will only work with one architecture, which was the real sticking point in the 90s…