Anti-Buzz Internet Social Media

Anti-Buzz: Public Journaling

Andrew has been writing Anti-Buzz for 4 years, resulting in almost 200 articles. For the next several weeks we will revisit some of these in case you missed them.

Something very important: software metaphors.

Another piece of metaphorless software you are likely familiar with is Facebook. Even with ‘book’ in its name, Facebook, and social media in general, don’t really look like anything you’ve ever used before. In fact, their ability to change their interface at will and still be Facebook is evidence that there is no real-world template for what is happening online. You can see it as a completely new technology. Through the 20th century, nothing that resembled social media existed. We’re not talking about an electric typewriter or an information superhighway here; we’re talking about something for which there are no metaphors.

Most of what you read about the technology revolves around advice for leveraging it, (which is fine). Others are interested in trends. Not a lot of what you read tries to make literal sense of what the technology is. So, what, really, is social media?


Social Media is Metaphorless

What does that mean? First, it means that social media is completely new, not simply an old form of communication enhanced by the Internet’s connectivity. This, in turn, means that we don’t fully understand its potential, nor its dangers.

Second, the lack of metaphor contributes greatly to the sense of alienation it brings to some users, especially older ones. The generation gap between young Facebook addicts and their confused parents is in part created by the metaphorless interface. The same people who “get” email, word processing, and streaming television “don’t get” Facebook. The difference between an old technology revamped with modern enhancements and a genuinely new technology is huge.

Social Media is Public

Setting your “well, duh” aside, think for a moment how important it is for most people to feel like they are sharing an experience. One source of wedding-planning agony for my fiancée and me was the choice of music during the reception. If your goal is to get everyone up and dancing, then good songs familiar to everyone are more valuable than better songs familiar to nobody. Familiarity is important. Sticking to music, many people just want to be aware of what the most popular music of the moment is, not because they even particularly like half of it, but because their familiarity with it will be shared with many other people. This familiarity produces talking points with both friends and strangers. Popular media in general maintains its popularity by promising a shared experience.

One of the biggest impacts of social media is that its persistent and public nature facilitates these sorts of comforting, shared experiences, but it does so without the expense of producing popular “lowest-common-denominator” entertainment to go along with it, (it has become abundantly clear that your average person can produce that sort of thing on their own). So, yes, obviously social media is very public, but the implications of this are not always taken seriously. If you’ve ever wondered about the difference between a Facebook addict and somebody who only uses it begrudgingly, you might consider it is not unlike the difference between somebody who listens to Top 40 Hits on the radio and somebody who couldn’t care less.

Social Media is Journaling

A common complaint I hear about social media is the quality of the content people’s friends produce. People who aren’t enchanted with the technology say things like, “I don’t need to know when my friends are at Starbucks.” And it’s true; none of us needs to know when anybody is at Starbucks. In my field I do occasionally come into contact with studies done on the content of social media, and if there is only one broad-stroked generalization to make, it is this: The vast majority of people who use social media spend the vast majority of their time journaling the mundane details of their day. The enormous bulk of all tweets are simply things like “at work,” “at lunch,” “having a beer,” and “thank goodness it’s Friday.” All the minor goalposts of one’s day, summarized over and over again. As if I needed more validation of this, a friend of mine complained of the capriciousness of Facebook, saying he would post a nice photograph he took and get no response, but if he said he was eating lunch at some restaurant, he would get 30 ‘likes’.

Pooling the last two concepts together, what we get is that social media is really just a public journal. You write in your diary, but then you leave it on the coffee table and invite people to look through it. That’s social media. The catharsis of journaling, with the comfort and validation of sharing your experience. And yes, if neither of those things appeal to you, then the whole institution is going to look a little strange.

If I am to bring this back around to some practical advice: I was wrong so long ago. People will /totally/ become fans of their dentist. You might not generate a lot of traffic or interest simply by maintaining a Facebook page, but an active user will, when they come in for treatment, likely make a post or two about where they are. If they can link to you or your page, even better. If their friends kill five minutes by talking about dentists, you might earn a new patient. It costs you very little to simply make sure you have enough online presence to put an email address or phone number into someone’s hand. At minimum, social media allows you to maintain a magic billboard ad that will appear wherever people talk about you. Social media, and Internet connectivity in general, is about lowering barriers to information – so lowering the barriers to information about your business is a simple and easy extension.

Anti-Buzz General

Anti-Buzz: Communications Breakdown


Why do computers lie to us? Why don’t they always listen to us? Well, they don’t lie to us exactly, and they can’t really ignore us, but given that we are prone to take everything they do so personally, it feels like lying. It’s hard not to feel slighted when communication breaks down between you and your electronic vessel. Is it reasonable to feel so personally invested in our computing? I say yes.

I used to classify my time playing video games as time playing solitaire. I was, after all, alone. But in a sense you could argue that, at the very least, I was competing against the design talents of the people who made the game. The best gaming experiences are the ones where you could feel some sort of implicit dialogue between yourself and the game designer. The same is true of reading books; you can call it solitary, but you are in some ways conversing with the author, provided they have suitably engaged your mind.

The same is true with computing. I spend a lot of time trying to demystify the apparent “intelligence” of computers and praise the real intelligence of humans, but I am admittedly swamped in these ideas thanks to my coursework and research. The truth is, for most of the time for most of the people, computing is more like reading a book or going to see a play; there is an implicit communication between the user and the creator. The inability of the computer to “know what you want” is, yes, a function of its non-existent intelligence, but it is also sometimes a failure of the engineer behind the software. Of course, trying to write software that works for everybody is like trying to write a novel that pleases everyone; the best you can achieve is popularity.

Given that personal communication is so integral to everything we do, (increasingly so now that it has become easier and easier to manage), I think we can learn a few things from the communications breakdowns we face with computers every day.


Ambiguity and Trust

I think the only appropriate response to the preceding dialog box is “Help.” This is a cherry-picked example, the result of plumbing Google image search for something suitably obnoxious, but dialog boxes are often ground zero for communications failures in computing. Worse is that user studies show that most people nowadays just click through these messages; and fair enough, I say. This simple and effective way to prompt the user has been killed by overuse. It used to be that my most common computing advice to people would in fact be to click through all these things. Neophyte computer users used to be so intimidated by the plethora of obtuse prompts that offered little in the way of choice or information that the only way to get people over the hump of technophobia was to encourage them to ignore all prompts.

Things aren’t as bad now, but we still run into the occasional choice between “Yes” and “Okay”, or “Yes,” “No,” and “Cancel,” or even a straightforward “Yes” and “No” when no question has been asked.

What we can learn:

  • Don’t provide irrelevant information or prompt too often – you only train people to trivialize what you have to say and lose trust in your ability to communicate.
  • Offer clear options, and don’t offer too many – too many choices either obscures what is important, or makes people think you don’t care what happens.
  • If you have a question to ask, remember to actually ask the question – (People make this mistake more than they would want to admit – and then get frustrated when their concerns go unacknowledged).
  • Don’t require an immediate answer if it is too disruptive. (Computers are still bad at this).

Time Estimation

You would think computers would be better at this: estimating the time. We’re certainly bad at it, (or at least some of us are), trying to cram too much into one day, or not enough into another, showing up too late or too early. The world, despite its efficiencies, is full of these tiny mistakes. Your download will take 3 hours to complete, then 5 minutes, then 32 minutes, all in the course of one real minute. It seems a mechanical process: measure something, add the somethings together, combine them into an agenda – so why are computers as iffy as we are about this? The exact details aren’t important, but suffice it to say that if computers could ever know exactly how long something would take, they would get a lot more done than they already do. The same is true of ourselves: if we always made these guesses correctly, we would spend our time more wisely and get more done. However, time estimation is also about communication. The world is full of collaborators, and they all have to know how long the other is going to take. Shaky time estimates from computers might be the stuff of jokes for us, but they are still a crafted part of the user experience.
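Those jumpy download estimates have a simple, sketchable cause: the naive approach divides the bytes remaining by whatever the transfer rate happened to be a moment ago, so one bad second swings the answer wildly. Here is a minimal Python sketch (the function names are mine, purely for illustration, not from any real download manager) comparing that naive estimate to one smoothed with an exponential moving average:

```python
# Why "time remaining" jumps around, and how smoothing tames it.

def naive_eta(bytes_left, last_rate):
    # Uses only the most recent transfer rate (bytes/sec), so a single
    # slow second can swing the estimate from minutes to hours.
    return bytes_left / last_rate

def smoothed_eta(bytes_left, rates, alpha=0.1):
    # Exponential moving average: each new sample nudges the running
    # average instead of replacing it, trading responsiveness for stability.
    avg = rates[0]
    for r in rates[1:]:
        avg = alpha * r + (1 - alpha) * avg
    return bytes_left / avg

# A connection that briefly stalls: steady 100 KB/s, then one 1 KB/s sample.
rates = [100_000] * 10 + [1_000]
print(naive_eta(50_000_000, rates[-1]) / 60)   # minutes remaining, naive
print(smoothed_eta(50_000_000, rates) / 60)    # minutes remaining, smoothed
```

With the numbers above, the naive estimate balloons to hundreds of minutes on the one stalled sample, while the smoothed estimate stays near ten; real progress bars use more elaborate versions of the same idea.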

What we can learn:

  • Stay in communication when time estimates change. (But don’t over do it).
  • Err on the side of overestimation. Apart from the old trick of playing with expectations, it is better to accidentally have too much time than accidentally have too little. Overestimation is the better mistake to make.


Consistency

It is amazing how consistent computers are, given that software developers can’t even agree on the best route out of a burning building. Imagine if you opened an application and the scroll bar was on the left, the window-close button was moved to a bottom corner, and the “Edit” menu came before “File”. You aren’t stupid and you aren’t unadaptable, but that application would always feel difficult to use, despite no particularly bad design decisions.

What we can learn:

  • Communication is improved with consistency. Expectations speed up communication and understanding with less information.
  • Deviating from standard expectations has a cost, (But is sometimes worth it).
Anti-Buzz Internet Social Media

Anti-Buzz: You are the Cloud


The Buzz: A person is smart, but people are stupid.

I’ll openly admit that when I first came here I didn’t quite know what to tell you about social media. The past couple of years have seen business attitudes toward social media move from uncertainty, to frustration, to acceptance, if not complete understanding. Of course, there is a world of difference between what a 20-year-old wants from Facebook and what AT&T wants. A simplistic view would be to say that social media represents public-private life, (in the sense that the interactions you have with friends in a theater or restaurant are public-private), and businesses should keep out, or at least not expect to fare better than a canvasser handing out pamphlets; but I think we already know this isn’t quite right either. I have an alternative view to share, given to me in a talk I saw last week: We are the cloud. Social media, or any other crowd-sourcing service that earns the participation of large numbers of strangers, is to people what the cloud is to computers.

So, that’s great. I’ve just explained a buzz word with a different buzz word, violating everything this column stands for. But no, seriously, let’s get after what I’m really trying to say.

Review: What is the cloud?

I don’t mean to talk your ear off about the cloud, at least not this week, but to cut away the generalizations inherent in buzz words, the driest, most straightforward explanation of cloud services is that they facilitate the deferral of computation. The cloud is about delegation and cooperation. If you need a lot of computational power now, and there’s a computer in the other room that isn’t doing anything, why isn’t it helping your computer get the job done? At its most ideal, the cloud is about infrastructure that lets us all worry less and less about which computer is doing the work. It’s about leveraging their connectedness into cooperation, sometimes performing tasks better or faster than any single machine could.
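The “deferral of computation” idea can be sketched in miniature. A local worker pool is not the cloud, but it is the same principle on one machine: the caller hands over jobs without ever specifying which worker performs them. A hedged Python sketch (the function and job names are mine, for illustration only):

```python
# Deferring computation: submit jobs, let the pool decide who runs them.
from concurrent.futures import ThreadPoolExecutor

def analyze(n):
    # Stand-in for real work (rendering, analysis, a database query...).
    return sum(i * i for i in range(n))

jobs = [1_000, 2_000, 3_000, 4_000]
with ThreadPoolExecutor(max_workers=4) as pool:
    # map() hands each job to whichever worker is free; the caller
    # never says which worker, only what work. The cloud extends this
    # same indifference across many machines instead of many threads.
    results = list(pool.map(analyze, jobs))
print(results)
```

Swap the thread pool for a pool of remote machines and you have the essence of a cloud service: the “which computer?” question disappears behind the interface.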

So, now imagine that instead of talking about computers, I was talking about people.

The Internet is a Cloud of People

Forget the latest captioned cat photo or how many likes your practice is getting and instead consider the role played by social media in the Arab Spring, and the role it will play in any other oppressed part of the world. Consider how difficult it is to be anti-democratic in a world where everybody talks to each other, (talk being the most democratic institution of all). Admittedly, this is the heavy, dramatic, (and easy to understand), view. But understanding the impact of our connectedness begins with appreciating the power of putting mass communication in everyone’s hands.

Consider now crowd-sourcing efforts like Kickstarter, which have essentially made anything self-publishable. Like a good medicine show, there are hucksters and people who are otherwise undependable, but the story of crowdsourcing is mostly one of success. Products and projects that were not possible in the pre-Internet world are now becoming commonplace.

So, think of the cloud analogy again and remember that the ideal is to do things faster and better than any one of us could on our own.

Working in science, I am reminded of this all the time, as many efforts are helped along by citizen science – the practice of letting large numbers of amateurs gather data. Like the cloud is many processors, each with a small part of the whole problem, citizen science gives, say, every bird watcher in the world a small job to do.

As we have changed how we do things over the past decade – doing things “the Internet way” – a common concern has always been the caliber of the average person. The assumption was that blogging couldn’t challenge news media because it was composed by common people. The assumption was Wikipedia couldn’t work because there would not be enough interest, and it was too susceptible to vandalism, (and we all know how keen on vandalism your average joe is). The assumption was YouTube would never produce anything valuable, that the cacophony of new media would drown out good taste and that most regular folk were just not discerning enough to find and support the diamonds in the rough.

Important leaps in society are never founded on the belief that the common person is terrible.

What’s more, some research suggests that connected groups of amateurs, (“turkers” in this case, named for Amazon’s Mechanical Turk marketplace), can actually outperform experts. And if you are dubious of the idea, consider that Wikipedia is the greatest, broadest, most carefully edited encyclopedia in the history of the world, largely thanks to its disassociation with any governor-experts. Consider that Twitter is much better at informing you about earthquakes than traditional media ever could be.

If you’re a cynic, then the confounding reality we are facing is that highly connected groups of strangers are, in aggregate, capable of more intelligence and productivity than our best individuals are. And where two decades ago some were predicting an intelligence singularity, a sort of apocalyptic moment where humanity was permanently outclassed by its own technology, recent trends are beginning to suggest that, if anything, we might be headed for a human singularity where we learn just how much we really are capable of.
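This “aggregate intelligence” claim has a classical mathematical core, sometimes called the Condorcet jury theorem. A toy calculation (my own construction, and it assumes voters answer independently, which real crowds only approximate) shows how a majority of mediocre guessers can beat a single strong one:

```python
# If each amateur is right 60% of the time, independently, how often
# is a majority vote of n of them right?
from math import comb

def majority_accuracy(n, p):
    # Probability that more than half of n independent voters,
    # each correct with probability p, land on the right answer.
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

print(majority_accuracy(1, 0.6))    # one amateur: 60%
print(majority_accuracy(101, 0.6))  # 101 amateurs: well above a 90% expert
```

The caveat is the independence assumption: a crowd that all reads the same pundit is effectively one voter. But it makes the cynic’s “confounding reality” less mysterious.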

Anti-Buzz Tablet Computers

Anti-Buzz: It’s so easy.




The Buzz: Tablets are going to kill the traditional PC

The Anti-Buzz: No, it’s true.

If you are my age it is inevitable that the topic of kids and tablets comes up; because you have kids and you have tablets, or you have friends who have kids and tablets, or you have friends who are disgusted that they have friends who have kids and tablets. Invariably, late night conversation turns to some anecdote about a 1-year-old teaching himself to watch his favorite videos on his mom’s iPad – though I’m not so sure how amazing this is, since when I was very, very tiny my top priority was figuring out how I could watch Star Wars in a continuous loop.

Opinions on the matter vary, and are conditioned on how old the kid is, and what they are allowed to do, and whether or not they are reaching for their parent’s tablet or their own. Studies on the matter are nascent and inconclusive. If you’re keeping score, this is also the first time people of my generation are revealing themselves to be old-fashioned kooks. We are inherently biased, for lots of reasons, against any child having something we didn’t when we were that age. There are numerous legitimate concerns regarding kids and tablets, but part of why it feels wrong is nothing but in-my-day, uphill-in-the-snow-both-ways cane waving.

I can’t and won’t expound too much on child psychology, but this phenomenon of toddlers who are unbefuddled by tablets can lend some insight into why the devices are so popular. To wit, they’re so easy, even a baby can use them. That was once a figure of speech. With tablets it’s actually true. Exploring what that really means breaks down to two things: 1) the interface is friendly to illiterate people and 2) the interface is highly intuitive.

When I Googled “why do we like touchscreens?” (admittedly a bad query) I got these results; essentially, it is just a list of articles written by cranky 30-year-olds waving their canes at touchscreens. They are wrong. All of them.

Much is being made of how mobile devices are killing the PC market. And I say that it’s true, but not so much in the sense that most hype dealers want to spin it. Yes, Intel’s sales are down. Yes, Internet traffic is increasingly mobile device driven. But saying that the PC is dead is like some fashionista declaring the end of pencil skirts or this guy declaring that this bar is over. It’s a hip, wild thing to say, but it betrays a certain emphasis on the popular.

I say it’s true enough that PCs are going to die as a casual computing platform. The influx of new PC users that were encouraged by the convenience of the Internet, (but only the convenience of the Internet), don’t need a whole danged computer. Complex work still warrants a full computer, and always will. At the very least, the infrastructure and software that makes your mobile experience so convenient, the graphic design that makes it so colorful, the video editing that makes the content you love possible, these all need powerful machines with complex input modes. The PC isn’t going to die, the popular kids are just done inviting it to parties. From the popular kids’ point of view, this is the same thing as being dead, but what do they know?

In a way, things are returning to how they used to be. The tactile world of the tablet makes your experience less like a computer and more like fussing with paper and maps and real physical objects. Using a tablet is in some ways like doing things you used to do before tablets. The PC, meanwhile, is returning to the loving embrace of the dedicated computer nerd, and I’m sure that, quietly, the computer nerds wouldn’t have it any other way.


Anti-Buzz: Traveling Salesman (Again)


So here’s something fun.

I’ve talked about the traveling salesman problem before. On one hand I should probably be happy that a serious Computer Science topic is given its own movie and sexed up for the general public, but I will be honest: this trailer makes me cringe.
This is anti-buzz after all, so let’s get to the task of dispelling the untruths and misdirections.
My first thought upon seeing this trailer was that it was going to unleash a new army of trisectors. What is a trisector? There is a famous old essay popular among mathematicians: What to Do When the Trisector Comes. You do not have to be very math literate to appreciate it, and if you choose to just read that article instead of mine, I would not be offended.
In short, a trisector is a mathematical crank: a non-mathematician who for some reason gets it into their head either to solve some daunting, open math question or, worse, to ‘solve’ something mathematicians have already proven to be impossible. ‘Trisector’ refers to someone trying to trisect an angle using only a compass and straightedge – a problem posed by the ancient Greeks and finally proven impossible in the 19th century, and one that is largely irrelevant today because we have means of trisecting angles with better-than-ancient-Greek tools.

This trailer exaggerates the impact of finding an efficient solution to Traveling Salesman, but what upsets the crank-wary part of me is that it’s not entirely based in fiction; indeed, a simple explanation of what’s really at stake does sound a bit like mathemagical sorcery: if an efficient solution to any NP-Hard problem (Traveling Salesman is one such problem) were discovered, then, via reductions, it would in turn yield efficient solutions to every other NP-Hard problem. Many NP-Hard problems are practical and have real-world implications, so the impact would be real.

The real scientific issue at stake is the question of P versus NP, two classes of problems that have different definitions but that, maddeningly, nobody has proven to be distinct. If they are not the same, then there are no efficient solutions for the NP-Hard problems, and if they are the same, then there are. While most scientists believe that P does not equal NP, the implications of them being the same are attractive to cranks: even now mathematicians are flooded with alleged proofs resolving the issue.

However, in the context of P versus NP, ‘efficient’ is a very loosely applied term. The layperson’s idea of computational efficiency is far more demanding than the bounds of polynomial-time algorithms. To put it in everyday terms, the difference between P and NP-Hard is the difference between years and centuries. Consumers want things done in less than a second. ‘Efficient’ solutions to Traveling Salesman would have a broad impact, but they would not mean that everything can be done in a second; and any definition of efficiency looser than that means nothing to a consumer.

This misrepresentation of computational efficiency is a chronic problem; the average person has no concept of what really hard problems look like. The trailer suggests the film has no desire to rectify this, as the analogy used in the trailer once again misrepresents the issue. Finding a coin in the Sahara is not a complex problem. It is only time consuming because there is so much sand, but the solution is simple: look through all the sand. The ‘glass’ solution eliminates one dimension – an improvement – but we were already well within the bounds of efficiency, by our loose definition of efficiency. The really hard problems are not ones with lots of sand, but ones that are difficult even when you have very little sand to look through.

I’m also not sure why the trailer has the image of a drill entering somebody’s head. I don’t see how that becomes more efficient as a matter of P versus NP. We really don’t need to be giving the cranks more to be excited about. They already think what they are doing is magic.

Exaggerating the impact of Traveling Salesman is forgivable: it’s a movie. Less forgivable is its anti-intellectual bent.

The trailer takes a turn for the uncanny with its use of the word ‘simplified’, presented not unlike some high-tech ad, preying on everyone’s fear that high-tech companies are luring us to our doom with their veneer of nice, simple, consumer-friendliness. People want to hate Google and its ilk for the influence and power they wield, even though they’ve pretty much only ever used it to make our lives better. People want to believe stories that say they shouldn’t trust that.

I would rather we weren’t so cynical, but that is a tall order as long as computer science remains mysterious and magical to the average person.

Anti-Buzz Internet

Anti-Buzz: Judo



When I’m thinking about modern technology but I’m not waxing philosophic about it here, I’m usually thinking about digital products – music, movies, photos, games, software – and the economic reality behind them. Occasional soapbox moments aside, I don’t delve into these topics much here because it has seemed to me that the subject wasn’t relevant to my audience. I also avoid the topic because arguing about intellectual property rights and effective business models in the digital age is pretty much Internet Arguments 101. There is a whole universe of discussion on the matter of piracy and the impending paradigm shift in notions of intellectual property, and talking about that stuff is pretty much the easiest way to make an adult look like an idealistic college kid. When I sit down to write these articles I make a good faith effort to say things that nobody else is saying.

But really, the reason I don’t talk about piracy much is because I know you aren’t worried about it. People don’t pirate dentistry. You perform an expert service, and that can’t be faked or copied. On the surface one expects your tech interests to lie mostly with how your day-to-day operations can be changed for the better, and secondarily, the subtleties of what those new technologies mean, (such as how less time needed to manage your appointment book means that your receptionist takes on new responsibilities … or fewer hours).

And all this build-up isn’t to justify some rant about how I feel the 21st-century-should-work and what-you-should-think-about-digital-goods. Far from it. It’s rather that I realized recently that my opinions about all that stuff behind the curtain are more applicable to you than I thought. Without engaging in college-kid diatribes, here are two bits of advice from the piracy battlefield:

Don’t Make it Hard to be a Customer

My favorite thing about Steve Jobs, (and understand that while I lionize him here, I am not without criticism of the man), was that he didn’t treat his customers like a bunch of selfish thieves. We live in a world where it seems every large company invests in some large infrastructure meant to make sure you can’t possibly steal their digital product. In an idealistic crucible, Apple is hardly innocent of this, but I do remember Steve Jobs explaining the decision to make iTunes with, “People don’t want to break the law.” He made iTunes in the face of Napster because he knew that the convenience of downloading music was more important than whether or not it was free. Again, iTunes itself isn’t above reproach, but Apple’s popularity over the past decade can be explained by its commitment to making the customer’s life easy. Microsoft makes you feel like a thief. Apple doesn’t.

A common mistake in the fight against piracy is that companies make it more difficult to be a legitimate customer. Don’t do this. Ask yourself, how easy is it to be one of your patients? How easy is it to find you online? How easy is it to make an appointment? Can appointments only be made over the phone? If so, you are losing new patients to every dental office that lets you make them online. Don’t believe me? Ask yourself how excited you are to be on the phone with a stranger right now. Ask yourself how a practice that only takes new appointments in person competes with a practice that takes new appointments over the phone.

Ask yourself how long customers spend in your office outside of actual treatment. How long do they spend filling out paperwork? How long does it take to schedule their next appointment? How long do they wait? /Don’t/ make it hard to be a customer. A lesson from the piracy battlefield is that people will choose what is more convenient over what costs less money. Trust me. Be convenient.

Another way I like to put this is: Don’t put yourself between your product and your customer. If you are politically minded, you can equate this to [not putting a bureaucrat between you and your doctor] or [not putting a bureaucrat between scientists and their science], whichever line you are willing to listen to.

Don’t be a Jerk

Which is the politest way I can put it. Another audience gets crasser words. An extension of the previous bit of advice is that being likeable is more important than being secure, (I think there was even a movie about this). People give money to things they like. If 20th century advertising taught us one thing, that’s it. The counterpoint to large companies making sure you can’t avoid spending money is small companies, (observation: you are a small company), ensuring that you want to spend money, which is arguably more important. Again, I’ll spare you the college-kid anecdotes, just take the advice to heart; and again consider Apple, who wins by convincing people to like them.

But what are you doing to be unlikeable? That’s hard to say. There is a fine line between being lenient and easy with customers and indulging them beyond feasibility. A good many entrepreneurs make a point of pleasing the customer, but eventually you draw a line that protects your own interests, and where that line falls is not obvious.

So consider this instead. Why should you care about being likeable?

I said you weren’t worried about piracy. I was right only in a literal sense. Where digital goods providers are “freaked out” by piracy, you are “freaked out” by something else.

You know what it is.

Social media.

Where the music industry is concerned that somebody, somewhere, might download an mp3 without paying for it, you are worried that somebody, somewhere, might badmouth your business for no good reason. You can’t stop all of the pirates any more than you can stop all of the cranks complaining about your business. But you can minimize. You can discourage negative behavior by being positive. I don’t have concise advice for how to be likeable, but at the very least, don’t be a jerk.

The metaphor is judo: using your opponent’s momentum to your advantage. You can put effort into leveraging social media directly, but more importantly, you just don’t want to get crushed by its weight. The highly connected opinion-of-everybody is a tremendous force you can’t stop. Don’t be a jerk. Don’t give it anything to complain about.

Anti-Buzz Internet

Anti-Buzz: Anti-Internetism

Andrew has been writing Anti Buzz for 4 years resulting in almost 200 articles. For the next several weeks we will revisit some of these just in case you missed it.



—About ten years ago I was studying film in college and we were lectured on how ideology can be encoded into art. “Ideology” is itself a tricky thing to define and trying to do so here could lead us too far astray, but one salient point was that ideologies seek to become invisible – that is, you are typically unaware of them. Successful ideologies feel like scientific law, their veracity so clear that you no longer want to call them ideologies, “ideology” being some word used to describe the culture and morals of those foreign to you, or those who hold positions you don’t like.

The naturalized nature of ideology is what allows fiction to make heroes and villains without having to explicitly define them as such. When a book or film is “trying to say something” and says it a bit too hard, we criticize the work for being pedantic, or preachy, or sappy, but what we are really reacting against is the uncomfortable visibility of the work’s ideology. When we have trouble relating to something foreign, it is in part because we are not fluent in the core ideologies that the work is built on top of.

The real key, however, is that you can go a long way in analyzing a piece of art not by looking at what it says, but by looking at what it assumes.

—About three years ago we all saw the first wave of bankrupt newspapers. This was on the heels of the 2008 financial collapse. For legal reasons, I think it best I just provide links rather than reproduce the images here, but consider this political cartoon from April 2009. Also, in accordance with my non-confrontationalist policy, I will tiptoe around the details of the financial collapse, but the underlying belief of this cartoon is that newspapers are a “scientific law” constant of our society, that they represent the public voice and the public discourse, and that their failure is the harbinger of a new voiceless American public. The argument of the cartoon is that the flailing newspaper industry is another piece of the big “injustice” pie. Considering that I’m basically just some guy and I get to blab at you each and every week, I’m not too sold on the idea that bankrupt newspapers are impeding my “voice”. I think we all understand that newspapers in general are not a sustainable business, and that a financial crisis only hastened their demise, but they were already doomed anyway.

Here’s another cartoon from the same era. This one is much crasser. Where the first cartoon is just a case of misguided mourning, this one seems to imply that my generation, by not having attended the right college, is a class of thoughtless dunderpates. But that is only what it says, and what it says is pretty clear: Internet culture is neolithic. What it assumes is more telling: that shuttering public discourse behind the filter of “publishing standards” made for better public discourse, and that a new society that enjoys freedom of speech in both word and deed is going to be besieged by blithering idiots. It’s ivory tower cultural elitism, the odd 20th century belief that anything available to the masses must be lowbrow or dangerous.

—About a year ago I was returning home from Europe, and the flight was sufficiently long that avoiding the in-flight entertainment was not an option. As sort of a guilty pleasure, my fiance and I watched several episodes of a new police procedural drama. This was as close to “normal television” as I had gotten in a long while. It was clear the show was aimed at people 20 years older than me, but what really stuck out were the underlying beliefs about the Internet. Bloggers are unironically ridiculed at press conferences. Police Commissioners agonize over social networks spreading public panic and otherwise irresponsibly handling delicate information. Villains record their crime sprees and post them on YouTube. Other villains learn how to make bombs from the Internet. The Internet is a big problem for law and order. This was one of my first inklings that traditional media was being overrun with anti-Internetism.

I’ve seen it elsewhere. Talk show host Craig Ferguson likes to say that Wikipedia is “wrong”. Aaron Sorkin’s drama The Newsroom has a token Internet journalist who is routinely ridiculed for taking any non-traditional media seriously. Most recently, I watched an episode of Saturday Night Live where a Weekend Update segment spent too many minutes on a gag about rude social media users being taken just as seriously as traditional sources, (“too many minutes on a gag” also being a good summary of Saturday Night Live). I’m sure I could fill up a book if only I had the stomach to watch more traditional television. I already encounter the anti-Internet ideology far too often considering how little television I watch.

I’m ranting about it here largely because I feel the sentiment is completely naturalized – that is, it is “invisible”. I think one could watch a lot of television and not “get” that nearly every show takes a stab at Internet culture. I’m hoping that by pointing it out, it might become more visible.

—This week, in the United States, we celebrate Thanksgiving. I should tell you here that, among other things, I am thankful that we are moving closer to a true freedom of speech, where everyone has not only the right but also the means to voice their opinion. I am thankful that the cynics are wrong, and that a typical user can patrol the cacophony of content and make responsible choices. I’m thankful that the “Lowest Common Denominator” is just an ugly myth perpetuated by a generation of overprivileged media producers. In short, I am thankful for the Internet.


Anti-Buzz: Corollaries

Andrew has been writing Anti Buzz for 4 years resulting in almost 200 articles. For the next several weeks we will revisit some of these just in case you missed it.


When was the last time somebody called your office and asked for directions? When was the last time a patient showed up late, (or not at all), because they got lost? Not everyone has a smart phone or GPS and, (somehow), not everyone can look up directions from home, but by and large, these scenarios seem like an absurdity. Not that it proves anything, but when I Google “asking for directions” I get a bunch of pages trying to teach you English; I count this as evidence that asking for directions is a fading institution.

Getting where you want to go has a lot less friction than it used to, and that benefits everyone. It used to be that if you were running errands and you wanted to tack on an extra stop to a new place you had never been to, that this was potentially a huge hassle. How do you get to this new place? You could hope somebody else knew the directions, or if you had the address you could look it up on a map, (a real map – the kind that doesn’t know where you are or plot your course for you). And if you had neither of those things? You could call the new place and ask. But how do you call them? You hope your first errand stop would let you use the phone. And you’d hope they had a phone book too. Or you’d find a pay phone. And even then you might still end up having to orient yourself with a map. Or you’d just give up and go home.

The people who put effort in improving telecommunications weren’t trying to make it easier for you to run errands, but they did. And every business that might have lost a customer to the uncertainties of the above process no longer does. The improvements in communications technology were also not intended to help drive customers into stores, but they did. The true impact of technology is often corollary to the original goal.

Ideally, your patients only visit you twice per year, so you benefit from this “never lost” effect less than others, but even so you are spending less energy offering directions and phone lines, (and phone books), to lost patients than you used to.

The lesson is that the benefits of technology are not always straightforward. This is why it can be problematic to look at something new and demand to know how it benefits you. Social media, as usual, is the go-to example here; trying to anticipate its impact is just as tricky as anticipating anything else in the past 30 years would have been. The last three decades have been full of the unintended benefits of frictionless computation and communication. Even the visionaries didn’t completely see what was coming.

There are elements of modern media that do their best to punditize technology, to throw around news and tech predictions with some amount of swagger. It is the same swagger that fills any pre-game show. It is the same swagger that elements of the media had back when “computers” and “fad” could be uttered in the same sentence. It is the swagger of false expertise, of understanding the status quo and trying to extrapolate from it. For my very small part, I am part of this media too. I do my best to talk about popular technology in an intelligent way, but when it comes down to it, I’m merely guessing, just like the rest of us.

There is the very old saying that necessity is the mother of invention. If you were careless, you might think the last 30 years runs counter to this. Who “needed” YouTube anyway, right? But consider the add-an-extra-errand debacle described earlier. Nobody in that situation assumed there was a solution to the problem. More accurately, nobody in that situation assumed there was a problem at all. It was just the way things were. But as soon as that process began to lose friction, people latched on. There was a need driving the innovation, it just wasn’t obvious. And that’s the real lesson and the real source of innovation. Do not look at something new, tap your foot impatiently, and ask what it does for you. Instead, find more problems. Look at your processes and policies and ask where they can be better. What is your least favorite part of your week? Can you make that better? Solve the problems you didn’t know you had and, perhaps corollary to solving them, you will discover the benefits of a new technology.

Anti-Buzz Security

Anti-Buzz: Patient Records Revisited


Andrew has been writing Anti Buzz for 4 years resulting in almost 200 articles. For the next several weeks we will revisit some of these just in case you missed it.

BIG NEWS, We may have to wait a bit longer for some new Anti Buzz articles as Andrew just became a new father. Congratulations. That means that I just became a Grandfather. 🙂

Anti-Buzz: Privacy in the Internet Age

The privacy conversation has more legs than I anticipated, having already found myself exposed to many stories and opinions on the subject. In early 2014, “predicting” that privacy is going to be a big deal in the near future turned out to be a safer and simpler guess than I had expected. But this is good for me, and for you, because I have a lot to say on the subject, and dentists have more at stake in the conversation than most people. Expect more privacy-centric conversations in the future here. This week: a practical map of what the concerns are for your practice.

First, a lot of what I will say today isn’t really new to this blog. My father has discussed electronic dental records many times before, and I’ve chipped in with my own perspective. My father was keen enough to the ambiguity of “ownership” before it was popular discussion. And most recently, of course, is HIPAA and what it could mean for you. In short, the records you keep on your patients are a hot commodity.

As somebody who increasingly fancies himself a scientist, I am very sympathetic to the arguments put forth in this TED talk – briefly: We stifle innovation by limiting access to patient records, yet this flies in the face of conventional wisdom and ethics. It is highly unlikely that your patient records are the key to curing cancer, but the truth is that we don’t know what innovations we are missing by keeping things locked up. This much should be easy enough to convince people of by now as the conventional wisdom has shifted far away from technophobia’s famous “Everything that can be invented has been invented” attitude.

The question is, of course, if the benefits outweigh the invasion of privacy, but I don’t actually presume to make up your mind about that. I do presume to tell you that you are going to need to take a position on the matter before too long. I am perhaps getting ahead of myself here. Let’s walk through why your patient records are important, and to whom.


Anti-Buzz: Driver’s Arithmetic

Andrew has been writing Anti Buzz for 4 years resulting in almost 200 articles. For the next several weeks we will revisit some of these just in case you missed it. This article from early 2013 on driverless cars is even more relevant today.


The Buzz: A computer can’t possibly account for all the variables in driving a car!
The Anti-buzz: Driving is only arithmetic.

Early in the year while we are still looking forward, I’d like to talk about another emerging technology – automated cars, (read up if you are unfamiliar with this phenomenon). I’m not going to pretend to make a claim about the development and adoption of this technology – 2013 certainly won’t be the year of the driverless car, and I don’t know which one will be – but the successes of 2012 give me an opportunity to revisit why computers are good for some things and not for others – and perhaps contrary to your intuition, computers would make much better drivers than we ever could. I am always eager to remind you that machines are not in fact very intelligent and people, despite what pop-cynicism might have told you, are really smart. The twist-ending to the story is that driving does not actually require much intelligence.

Driverless cars are one of those “magic” technologies that are either in demand or on the immediate horizon, yet I think its prospects are better than many others. As a foil to driverless cars, let us consider a competing “magic” technology: Voice recognition. From a lay perspective, both of these technologies might seem similarly complicated. In fact, voice recognition might seem simpler than driving a vehicle; after all, kids learn to communicate and talk long before we trust them with a car. The technologies face quite a different set of challenges, not all of them technical, but understanding these differences can be illuminating. I think it best to proceed by answering a string of hypothetical questions.

If voice recognition is so hard, why are we already using it? The answer is that there is a lack of risk. The biggest impediment to driverless cars, really, is that the stakes are so high. If a robot car fails, people die. When a customer service phone tree fails to understand what you are saying, it is a mild annoyance. What voice recognition we do use is unfettered by safety concerns. If you want some machine learning jargon, a problem domain includes a loss function – that is, a way to score success and failure in a way that fits the real world. The loss functions for these two problems are very different. Voice recognition that works 98% of the time is very impressive. A car that makes the right decision only 98% of the time is life-threateningly terrible.
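The loss-function point can be made concrete with a toy sketch. The functions and the cost figure below are illustrative assumptions, not any real system’s metric: the same 2% error rate is a fine score under one loss function and a disastrous one under another.

```python
# Toy illustration: identical error rates, very different loss functions.
# The cost_per_failure figure is an arbitrary assumption chosen to
# dramatize how heavily a safety-critical domain weights each mistake.

def transcription_loss(errors, total):
    """Voice recognition: every mistake is the same mild annoyance."""
    return errors / total

def driving_loss(errors, total, cost_per_failure=1_000_000):
    """Driving: each failure is potentially catastrophic, so weight it heavily."""
    return errors * cost_per_failure / total

# A 98%-accurate system makes 2 mistakes per 100 attempts.
print(transcription_loss(2, 100))  # 0.02 -- impressive for speech
print(driving_loss(2, 100))        # 20000.0 -- unacceptable for a car
```

The accuracy number is identical in both calls; only the scoring changes, which is the whole point.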

Why is driving an easier problem? More pointedly, why did I say driving does not require much intelligence? People don’t want to hear the latter because driving is something they do, and it is not easy. However, people aren’t stupid because they can’t compute a square-root faster than a computer. People aren’t stupid because they can’t store a novel in their head and reproduce it word for word, (but your e-reader can).

The thing about driving is that it is a very well-defined process. For basic operations, there are clear laws, and a very well codified set of symbols that indicate all the special rules governing any location, (Stop signs, traffic lights, white dashed lines, yellow dashed lines, solid lines, lane markers, etc). For the parts of driving that require on-the-fly thinking, it is still mostly a matter of observing objects, calculating distances, “looking a few moves ahead” and avoiding collision. Driving is, in many ways, just a tedious math problem. Doing it safely requires discipline and focus and mental endurance and the ability to not let your mind wander.

If, instead of putting yourself behind a steering wheel, you put yourself behind a notebook and had to scribble out the answers to arithmetic problems as fast as you could, you wouldn’t think twice about letting a computer take over. In contrast, voice recognition suffers from all sorts of inconsistent noise. Instead of the regularity of road signs and the simplicity of measuring how far away the next car is, voice recognition has to deal with the fact that people mumble, that people have accents, that they speak different languages, that they sound different in different emotional states, and that each person has a unique voice. This says nothing of the task of actually understanding language. Language is a more idiosyncratic, intuitive, /human/ thing. It’s not so easily codified.

Would a driverless car really be safer? I think so. The reason why really illuminates the difference between what computers are good at and what humans are good at. A robot car would not only know that there was a car ahead of it, but exactly how far away it was, how fast it was going, and exactly how long it would take to brake to a stop. A robot car would never be bored, have to sneeze, or otherwise take its eyes off the road. A robot car is looking ahead of itself and behind itself at the same time. If the robot car is common enough, it’s talking to all the other nearby cars. Now it no longer has to guess about the car ahead of it slamming its brakes – it would be told directly. The computer would be privy to information a human never would, and it would enjoy the ability to “look” in all directions at once and never be distracted. It really is just on-the-fly number crunching, and that is not something humans are good at.
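That “on-the-fly number crunching” is mostly high-school physics. A minimal sketch, where the reaction times and the 7 m/s² braking deceleration are illustrative assumptions rather than measured data: stopping distance is reaction distance plus braking distance, and shaving reaction time is exactly where a machine wins.

```python
# Stopping distance = (speed * reaction time) + speed^2 / (2 * deceleration).
# All figures below are illustrative assumptions, not measured data.

def stopping_distance(speed_ms, reaction_s, decel_ms2=7.0):
    """Meters traveled from a hazard appearing to a full stop."""
    reaction_distance = speed_ms * reaction_s           # travel before braking starts
    braking_distance = speed_ms ** 2 / (2 * decel_ms2)  # physics of the braking itself
    return reaction_distance + braking_distance

speed = 30.0  # meters per second, roughly 108 km/h
human = stopping_distance(speed, reaction_s=1.5)  # typical human reaction time
robot = stopping_distance(speed, reaction_s=0.1)  # sensor-plus-compute latency
print(round(human, 1), round(robot, 1))  # roughly 109.3 vs 67.3 meters
```

Same physics, same braking hardware; the machine’s edge is simply that it never looks away and reacts in a fraction of the time.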

If the problem is so simple, why hasn’t it gained traction until now? How long until it is commonplace? All of the above shouldn’t trivialize how amazing the current technology is. First, a lot of what makes driverless cars possible is the ability to give a computer the same sensory faculties as a human. This is no small task, nor is it particularly affordable right now. I downplay the “intelligence” required to drive, but the AI technology needed to successfully “see” things like road signs and painted lines is definitely not trivial. The second question is much harder. I’m very optimistic about the viability of the technology, but less so about its ability to become a consumer product. The day will come, but there are two huge hurdles. One is public opinion – it seems very mixed right now. If too many people refuse to trust the technology, it will be a struggle for it to succeed. The other is legal. Even just the question of liability is enough to delay the technology for perhaps decades – when a robot car kills someone, who is at fault? Driverless cars have been legalized in three states already, but the current law holds the “driver” responsible – the car can’t be unmanned and it is assumed the driver is alert and ready to take control at any moment. The dream is to be able to surf the Internet or read a book or do whatever you wish while your car chauffeurs you around.

That last reality might yet be far off, but it wouldn’t be for a machine’s stupidity, I can tell you that.


Anti-Buzz Internet

Anti-Buzz: The Bicycle Super Highway

Andrew has been writing Anti Buzz for 4 years resulting in almost 200 articles. For the next several weeks we will revisit some of these just in case you missed it.

The Buzz: The new always obliterates the old.
The Anti-Buzz: The old builds the house that the new enjoys.

Cars changed everything. Nowadays we indulge in conversations about fuel efficiency, alternative energies, or just going green and biking to work; all fine conversations to have, but maybe it is too easy for any car cynic to forget just how revolutionary cars were. One of several modern transportation inventions, the car stands out over trains and airplanes because it is small, personal and democratic. I can ride a plane from Los Angeles to Australia, or take a train from Manhattan to Vermont, but I can drive my car to the grocery store, the theater, my friend’s house, the woods or anywhere I want. With a car, a person’s range increases tremendously, and the reach of businesses explodes. We say the Internet is the most important invention since the printing press, but don’t forget that early Internet metaphors referred to a Super Highway. If we want to say the 20th Century ended with the fall of the Soviet Union, (as some historians would have you do), then we can safely say the advent of automobiles, roads and highways was the Internet of the 20th Century.

Now, I’m not here to laud car usage for 1,000 words, nor am I here to ruffle the feathers of any bicycle enthusiasts, (of which I am one). No, the next observation is to consider the state of car-alternatives. For brevity, I will focus on the bicycle. Now, as a technology, bicycles owe nothing to cars. They predate cars, and borrow no aspects of the internal combustion engine, unless you wish to argue for vague notions of “modern engineering.” They even enjoy the same destination-versatility, (like cars, they take you “where ever you want to go”). However, insofar as the bicycle performs as an effective mode of transport, it owes almost everything to the car: the paved infrastructure, the greater accessibility of businesses, the availability of bicycle parts and repair services – all of these are driven by the automobile. The bicycle is better off than it would have been had the automobile never taken off.
Perhaps the analogy to cars and bicycles doesn’t resonate with you, but more than likely the idea I floated last week about social networks supplanting your intra-office communication does. The primary contribution of revolutionary technologies is that they drive new infrastructure, and if you’re the sort who still scoffs at social media, you are ignoring that its immense popularity is shaping the infrastructure of the future. It’s reasonable that your present concern with social media is how you can leverage it as a business tool, but eventually the popularity of social media will be less important than the communication motifs it has helped create. It just might be that trying to connect with millions of Facebook users is a waste of your time, but it is not absurd to think that we might repurpose the “what” of social media: public visibility of activity, public “wall” style communication, ease of embedding media, topic tracking, private communications both live, (chatting), and asynchronous, (messages/emails) – these are all sensible business technologies.

If you are a tech trend geek, then you would do well to look for these infrastructure moments – and be wary of trend claims that ignore this reality. My favorite among the latter is “The PC is dead”, bandied about in the wake of smart phones and tablets. These micro-pseudo-computers are certainly “the future” but touting them as PC killers is not unlike touting the bicycle as the automobile killer. Tablets are more affordable, better in some ways, and also less powerful and more narrowly designed. Smartphones are great, but they are the bicycles of the Internet. If everybody switched to bicycles tomorrow, we would still be wise to run this nation’s freight on the back of an internal combustion engine. Even as we see “mobile” versions of websites, (mere bike lanes), the PC still drives the infrastructure, and will continue to do so for some time. Thinking we are done with real computers because we have tablets now is sort of like saying we don’t need scientists any more because we have Popular Science Magazine.

I digress somewhat. The importance of something like a tablet is that we have more Internet users because we were able to give them a less obtuse computer. Computers are “for nerds” and somewhat correctly they will continue to be so – regular people can enjoy a smaller, pop-computer that specializes in the sort of things a regular person wants to do, (And it is too bad the car/bike analogy breaks down here). Tablets are great, but they are only as good as they are because they can use the infrastructure built by the computer.

Anti-Buzz Future Tech Hardware

Anti-Buzz: Stylus Pens

Andrew has been writing Anti Buzz for 4 years resulting in almost 200 articles. For the next several weeks we will revisit some of these just in case you missed it: This one from last year is interesting considering that one of the major selling points of the new Surface Pro 3 is the stylus.


The Buzz: A tech prediction! The Anti-Buzz is going to make a tech prediction!
The Anti-Buzz: I predict that stylus pens are going to get really popular.

Groan, I hear you say, that’s a boring prediction. I admit that it might only seem bold in as much as it is very unexciting. Stylus pens? you ask, Why would you care about stylus pens? And they already are popular among professionals trying to keep up appearances while they play Angry Birds at lunch. And they’ve already been popular among smart phone users, particularly Blackberries. So, yes, my prediction needs to pack a little more punch.

How about this? My assertion is that stylus pens will be the focal point of the next big interface improvements. In a few years you won’t be able to work your iPad without one. And you won’t want to.

So I really am sticking my neck out here; things might not break this way.

We are at critical mass on touch interfaces, so much so that even Microsoft is trying to push them for /desktops/. If I were Grand Czar of the computing industry and, several years ago, had been tasked with trying to get more computers in the hands of “technophobic” casual users, I’d still be shoveling netbooks into the fire and hoping.

I wouldn’t have thought of going sideways and pushing tablets. Ever. But somebody, somewhere, saw smartphones, saw the lukewarm reception to netbooks, did one of those cartoonish double takes where they go back and forth. Smartphones. Netbooks. Smartphones. Netbooks. Finally – eureka – they understood the truth and their finger shot in the air: “People really hate keyboards!” And I think “not having a keyboard” really is the magic behind the tablet boom. Losing the keyboard wasn’t some compromise consumers made so they could have tinier computers. Losing the keyboard was the selling point. I think for a lot of people, keyboards create stress. They are just a lot of buttons and space.

Indeed, we’re experiencing a new high in pop-computing. The kind of person who was comfortable with computers – would use them at work, might use them at home, check their email, read a little news – was still not among the rest of us geeks who would put ourselves in front of a screen for every waking hour if we could. Now everybody’s in front of a screen. And most of them are touching it.

But something has to break the other way. These new touchscreen interfaces are refreshing, and made using a computer feel like a day under the cabana, but the vacation has to end eventually. The further tablets penetrate the consumer computing market, the more they are going to be expected to help us get our work done.

Enter the stylus pen. Right now, the stylus has a narrow use, its acknowledged function being that it offers more precision for a touch interface. This application is obvious on smartphones, which often sport them to match their smaller screens. But as applications become more complicated, (and serious), there has been an increasing need to move away from the touchscreen’s usual array of fat, comforting buttons. This is the easiest selling point: stylus pens allow for more complex displays and more precise interactions.

But this isn’t all about precision. We also lost the mouse in this shuffle. Easy, cabana-worthy computing doesn’t miss the mouse at all, as it is pretty much replaced by the touchscreen, but there are a few quiet advantages a mouse still has, the biggest one being that it has buttons, (though admittedly one of those buttons simply approximates “touch here”). The stylus pen will replace your finger because we can install buttons and other features into a stylus pen and we can’t do that with your finger. (It’s already happening with Samsung’s Galaxy Note series). Feature-rich stylus pens are in their infancy and I won’t wager on what the final iteration is going to look like, but I can speculate here. In addition to straightforward buttons they might react to you squeezing them, or maybe the back end will have a button to click, (like you do to use a real pen), that will put them in a different mode. We might even see the addition of pressure sensitive surfaces, adding even more variety to our ability to interact with a computer through a touchscreen. Regardless, somebody better at this than me is going to figure out the best way to use the real estate of a stylus pen, and it’s going to make the current run of tablet computers look archaic.

Plus, we can finally stop smearing the grease from our chicken wings all over our screens. That will be nice too.

Anti-Buzz Social Media

Anti-Buzz: Broken Windows

Andrew has been writing Anti Buzz for 4 years resulting in almost 200 articles. For the next several weeks we will revisit some of these just in case you missed it:

No, not that Windows; these. The broken windows theory of crime is not something I would profess to be an expert on, but the idea is applicable in matters of professionalism and technology. My layman’s summary is that most crime is opportunistic, and will be more often found in situations where there is a perception of low enforcement. Broken windows beget more broken windows; their presence signals that “nobody cares” and so the next would-be vandal is more likely to take the risk. Vandalism begets more vandalism, graffiti more graffiti. A disorderly neighborhood feels less safe, making residents more fearful, and as Yoda has assured us, fear leads to hate, hate to suffering, and et cetera until we cross over into the Dark Side. The argument is that keeping things clean and orderly goes a long way to preventing crime.

Even in lighter matters, we are inclined toward a similar ‘community standards policing.’ I suppose it doesnt really matter how u spell it or if you dont remember some punctuation or maybe youre sentence runs two long. Just know that the signal it sends is that you think vandalism is okay. I myself am guilty of scoffing at etiquette and good appearances. Even now I sometimes wish that people could just focus on ‘what really matters’ in a meritocratic sense – who cares how someone dresses if they get the job done? Yes, it may be that someone who is over-concerned with superficial matters of dress and niceties is not a very useful person, but all those niceties do have a use in the end – they reassure everyone around you that you aren’t the sort of person who slashes tires and smashes mailboxes.

Of course, I doubt I need to lecture dentists on the value of professionalism. What I do need to point out is how ignorance of tech matters can leave you with broken windows you might not be aware of. Using technology to reach new customers is a double-edged sword, as it also gives you new opportunities to look unprofessional.

The most egregious example is, of course, overreacting to online reviews. More down to earth advice is to not let any web presence you maintain yourself, (as in your website or Facebook profile – not your Yelp page), fall into disrepair. ‘Disrepair’ can mean any number of things, but mostly just keep the information you have current. The only thing worse than a business with no web page is one that lists the wrong business hours – it is in effect a broken promise. If you told me that your listed hours are correct, I would ask you if you had bothered to post your holiday hours this past July 4? Do you take a week off in December? Is that reflected on your website? It’s a small thing, but as soon as the customer is affronted by some minor vandalism, they start to feel less safe. Your cheery disposition seems less sincere. Your next small mistake is magnified. It is not absurd to suggest you might lose a patient all for the simple fact that you forgot to say you were closed for Labor Day.

On the subject of websites: minimal is fine, trashy is not. I have dismissed fears of ‘security’ in the past, saying that it was up to the user to be savvy enough to avoid the metaphorical dark alleys. Another way to look at it is to avoid broken windows, both in your website design and your general Internet exploration. To come full circle, poor grammar is the hallmark of scam emails. As for websites: Pop-ups? Unwelcome videos? Bad colors? Only rudimentary HTML? You know it when you see it. There are would-be reputable websites that break the rules and come across as looking like a dangerous side street. Don’t surf there, and don’t let your own page become one of those neighborhoods.

More front and center are the computers in your office. I would not load up on Macs or otherwise dictate your practice management software simply for how sleek it looks, but you do have to be aware that what is on a computer screen does say something about its user, and your patients will look at your computers from time to time. I wouldn’t agonize over the decision, but desktop wallpapers should match some standard. Desktops themselves should be free of clutter. No random files lying around because somebody was in a rush and wanted something to be convenient. I wouldn’t think “what does my desktop say about me?” but rather just be wary that the monitors in your office are yet another chance for you to reveal some broken windows.


Anti-Buzz: Wish Fulfillment

Andrew has been writing Anti Buzz for 4 years resulting in almost 200 articles. For the next several weeks we will revisit some of these just in case you missed it:

So I like to harp about statistics and science and how they get portrayed in common media. This stuff matters to you, of course, because the care you provide is, ultimately, justified by studies and findings and conclusions drawn from medical science and statistics. I came upon this joke image a few days ago.


For some context, the Toronto Maple Leafs are a team in the NHL that has, for about 15 months now, consistently done better than stats geeks think they deserve to. That projected season record, if it were to come to fruition, would be the best regular season in the NHL by a mile. Of course, it’s all a joke, because any reasonable person understands that whether or not a team comes from a “C-name” city has nothing to do with the outcome of a hockey game. Besides, The Leafs beat the Calgary Flames 4-2 on Wednesday, so this silly C-curse is clearly hogwash.

The elephant in the corner is that, mathematically speaking, there is nothing incorrect or inappropriate about the above analysis. We “know” that letters in team names are completely disconnected from the cause-effect chains that govern hockey. And yet, there’s no rule that says what factors we can and cannot consider when performing an analysis – if there were, it would be a bane to innovation – and so we are left with the fact that, well and truly, the Maximum Likelihood Estimate for the probability that the Toronto Maple Leafs will defeat a team from a city that has a name that doesn’t begin with the letter ‘C’ is 100%.


Even correcting with a Bayesian prior, (in this case, assume you spot each category a free win and a free loss), we still have the Leafs at roughly an 89% chance of beating any non-C team, and only a 17% chance of beating any C team. The projected season record in this case would be about 65 wins, still an all-time best for the NHL. Winning and playing against non-C teams is perfectly positively correlated, with a correlation coefficient of 1. The list of stupid statistics you can generate on this goes on, and none of them are invalid outside of the fact that our intuition rightly tells us they are invalid.
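The arithmetic here is easy to reproduce. A minimal sketch of the add-one correction, where the records (7-0 against non-C teams, 0-4 against C teams) are hypothetical numbers chosen only because they roughly reproduce the 89% and 17% figures:

```python
def smoothed_win_prob(wins, games):
    """Estimate a win probability after spotting the team one free win
    and one free loss (add-one / Laplace smoothing)."""
    return (wins + 1) / (games + 2)

# Hypothetical records, assumed purely for illustration.
p_non_c = smoothed_win_prob(7, 7)  # 7-0 vs. non-C teams -> 8/9, about 89%
p_c = smoothed_win_prob(0, 4)      # 0-4 vs. C teams -> 1/6, about 17%

# Without smoothing, the Maximum Likelihood Estimates are just the raw
# ratios: 7/7 = 100% and 0/4 = 0%.
```

Note that the smoothed estimates can never hit 0% or 100%, which is the whole point of the prior: a perfect record over a handful of games is weak evidence, not certainty.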

The criterion of C vs. non-C teams is patent nonsense, but this should tell you something about the statistical models we use to make important decisions: they are still very much a product of human biases. I’m giving you a silly example to play with, but more legitimate analyses are still plagued by the problem that we choose what to examine.

To bring this back around to technology, the new ‘big data’ phenomenon is in part about just keeping all the information we can and letting the computer figure out what is and isn’t important. This is not as easy as it sounds, and there’s a bit of a chicken-and-egg problem in that generalized, ‘unbiased’ algorithms still need certain parameters to be set, and those parameters are set by a human.

To bring this in a more critical direction, my skin usually crawls when arrogant literati like Jean Baudrillard say stuff like: “Like dreams, statistics are a form of wish fulfillment.” (First of all, nobody would tolerate a mathematician being so curtly dismissive of an entire intellectual endeavor – they would be quickly branded as “emotionless” or “cold” or whatever else people say when they don’t want to listen to mathematicians and scientists.)

Yet, sometimes, like in the case of the Toronto Maple Leafs, the statistics really are a form of wish fulfillment.


Anti-Buzz: Operating Systems

Andrew has been writing Anti Buzz for 4 years resulting in almost 200 articles. For the next several weeks we will revisit some of these just in case you missed it:


The Buzz: Operating Systems are a brand, inseparable from the machine they are sold with.

The Anti-Buzz: Operating Systems are just mediators, and more interchangeable than you might think.

Operating systems: How do they work? What is their relationship with your hardware?

Where is the operating system?

When most people think of operating systems, they probably think of only two options: Mac and Windows. This is fine, but it does ignore the fact that your smart phone, (even if it is a “Windows” phone), has an operating system, as does your tablet and your Kindle. Of course, pointing this out only exacerbates the first common misconception I’d like to destroy, which is that the OS has anything to do with your hardware. If you observe that iOS is only on iPhones, Mac OS is only on Macs, and Windows is only on “PCs”, you’d be forgiven for thinking that the OS was in fact part of the hardware itself, that it was encoded, at least partially, in the actual transistors of your machine.

No, all operating systems are 100% pure software, and they 100% live on your hard disk. They are just programs, not unlike all that other software you use. They are, however, incredibly reliant on the hardware they work with, so in some specialized cases, such as your smart phone, both the hardware and the operating system were designed with each other in mind. However, computing hardware and software have become increasingly modular in design, so it is likely that all of these components can be made interchangeable, with some work, (that is, iOS on your PC, Windows 7 on your Kindle, whatever).

In fact we have seen this already to some extent, as it was widely publicized that a little elbow grease could get Chrome OS installed on a Nook Color, effectively turning it into a tablet at a fraction of the cost. In a brilliant stroke of 21st century thinking, Barnes & Noble responded by saying “Cool, we’ll just sell them this way,” instead of the usual, paranoid protect-our-product-from-our-customers that we are used to seeing from the entertainment industry, (I wouldn’t be surprised if, being a bookseller-that-survived-the-great-reckoning, B&N has a better set of priorities than many software companies).

Anyway, less surprising, (but maybe you did not know?), is that you can install Windows on your MacBook, and Linux too if you are so inclined. Some of you might be a little surprised to think of a machine with multiple operating systems, (you just pick which one you want to use when you boot up), but that should drive home the point that operating systems are just software, and all your things (photos, music, documents) are just data. In my case I listen to the exact same mp3s when I am using Windows as I do when I am using Linux – it’s just data, and the operating system is just a portal to that data.

How does the operating system use the hardware?

Again, the OS is extremely reliant on hardware, most especially the CPU. A compiled OS will only work with one architecture, which was the real sticking point in the 90s…


Anti-Buzz: Twenty Questions

Andrew has been writing Anti Buzz for 4 years resulting in almost 200 articles. For the next several weeks we will revisit some of these just in case you missed it:

Computers generally run in a binary fashion, that is to say for any situation there are only two possible answers or next steps. How does that affect computing? Andrew gives us a good example using the old game of twenty questions:


Here and there I’m taken by the urge to impart some of the more abstract wisdom of computer science. I’ve made attempts in the past, perhaps with some success, but my approach has always been very direct: I try to explain a math or science topic in so-called layman’s terms, and I’m thinking I should do it the other way around. This article is the first of three parts, a sort of pilot program if you will, where I discuss three “normal” things and reveal their common computational elements. I’ll be moving in reverse chronological order. I hope to improve understanding of computation beyond just the gadgets you use every day. The core ideas of computer science have been with humanity for a long time, independent of any screens and processors.

For this week, consider the game of Twenty Questions: One person thinks of something, and the other tries to guess what it is with a series of yes or no questions (limit 20). It is a straightforward enough parlor game, but if you play it enough times you might develop a sense of strategy, which is where the game stops being a free-flowing human endeavor and begins to look like a mathematical one.

What is the best question to ask?

We’ll get some obvious intuitions out of the way first. You typically want to avoid narrow questions. If you come out of the gate with “Is it a dog?” “Is it a cat?” “Is it a fish?” you are likely to burn through your questions too fast. “Is it an animal?” would have saved you a lot of trouble up front. Likewise, you need to respect the logical history of your questions. If you ask “Is it an animal?” and are met with “No” then it would be foolish to ask “Is it a dog?” Most people grasp these basic concepts, but suppose it has been put to you to figure out what the absolute best question to ask was, given any opponent or situation.

“Is it an animal?” is a good first question, but could it be better?
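This is exactly the logic of binary search: a question that splits the remaining possibilities evenly in half every time lets twenty questions distinguish 2^20, a bit over a million, different answers. Here is a toy sketch of that “best question” criterion; the candidate pool and the question names are made up purely for illustration:

```python
def best_question(candidates, questions):
    """Return the question whose yes/no split of the remaining candidates
    is closest to an even half-and-half; a perfectly even split rules out
    the most possibilities no matter which answer comes back."""
    def imbalance(question):
        yes_count = sum(1 for c in candidates if question(c))
        return abs(yes_count - len(candidates) / 2)
    return min(questions, key=imbalance)

def is_animal(thing):
    return thing in {"dog", "cat", "fish"}

def is_dog(thing):
    return thing == "dog"

pool = ["dog", "cat", "fish", "rock", "car", "cloud"]
# "Is it an animal?" splits this pool 3/3; "Is it a dog?" splits it 1/5.
```

Calling best_question(pool, [is_dog, is_animal]) picks is_animal, matching the intuition above that narrow questions burn through your twenty guesses too fast.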


Anti-Buzz: Do computers age?


Andrew has been writing Anti Buzz for 4 years resulting in almost 200 articles. For the next several weeks we will revisit some of these just in case you missed it:

The Buzz: Our expectations change too fast, and we dismiss machines too quickly.

The Anti-Buzz: Our expectations change not because of new hardware, but because of new utility.

Jokes about your computer being outdated before you can even unpack it from the box are a little old-hat, and also don’t seem to ring quite as true as they did in the 1990s, but I still meet people who are frustrated with the fact that their four-year-old laptop is just not good enough. After four years your car is still more or less as functional as it was when you bought it, and after four years your home is still a home and you can still live in it and many of the things you use every day might be several years old and still perfectly functional, so it is understandably upsetting when you need to turn over an expensive item like a computer with such frequency.

I discussed Moore’s Law last week, and gave a name to this frequent obsolescence. What I did not address was why we can buy a new computer, marvel at its state-of-the-art behavior, and then, years later, deride the same machine as slow and unreliable. By and large older computers don’t ever lose their functionality, so why do some of us perceive them as slow? Are the recalcitrant among us right to cling to their old laptop because it still more or less does everything right? More interesting: why do computers seem to age?

Do computers actually get slower with time?

Yes. Your car gets less fuel efficient over time, and your home suffers gradual weather damage. A computer is still a physical machine and it wears down too. Specifically, heat kills your machine, and it generates a lot of its own, and some outside forces can contribute to make it worse. For desktops and laptops, dust is a big issue, and all high-powered electronics are a vortex for dust and dirt. Dust can get caked onto components and prevent heat from dissipating, making your computer age even faster. If you are intent on using your machine longer than the conventional technophile would tolerate, seek help in how to keep the inside of your machine clean. Think of it like an oil change.

Additionally, your machine can be slowed by disk fragmentation, which is a cliché observation by now, but it’s true; I wrote about it in detail some time ago. Unlike physical wear and tear, fragmentation can be reversed, but it certainly contributes to an impression of slowness.

Corollary to fragmentation is the amount of software you install. When you first get a computer, you don’t have much on it. Its sole responsibility is to turn on and do what you ask. Over the next few years you slowly add to your computer’s list of responsibilities. A lot of software people install runs in the background: anti-virus scans, task bar widgets, e-mail notifications, Facebook notifications, Adobe update reminders, Java update reminders, and if your browser is open all of the time, you probably have a similar host of gadgets attached to it. All of these things take computing resources away from whatever it is you were telling your computer to do. Compare the service you receive in an empty restaurant with the service you receive in a crowded one. Your four-year-old laptop is probably a very crowded restaurant.


Anti-Buzz: HBO’s Silicon Valley

HBO is currently running the first season of “Silicon Valley”, and now is as good a time as any to “anti-buzz” its depiction of the software engineering industry. It was pointed out to me that for decades television shows have been mangling the professions of cop, lawyer, and doctor en masse, so in as much as “Silicon Valley” gets it wrong, the software engineers of the world should probably remember that they’ll have to get in line to file a complaint. Hollywood exaggerates stereotypes, film at 11.

(Editor’s Note: Dentists too. As far as Hollywood is concerned dentists are either bumbling fools or sadistic monsters. But that is another story.)

Anyway, how I’m going to do this is walk through the pilot episode and categorize the gags as either accurate, exaggerated, or gibberish. Let’s go.

In the opening scene we are treated to an expensive party which delivers two gags: Kid Rock is playing at a private party and nobody in attendance cares, and that the man who is celebrating his newly earned investment capital reminds us all that the best part of being a Silicon Valley start-up is that you get to make the world a better place. Verdict: accurate. Actually, this opening scene had me pretty excited to see the rest because it really nailed some things, namely that the Valley’s lavish spending on social events is often misguided, and that every company under the sun touts that they are making the world a better place. In fact, this joke about “making the world a better place” is one of the show’s recurring themes and it is a pretty accurate statement that tech companies like to spin everything they do as some great humanitarian good.

Next we see “the incubator” which is rather like a hacker hostel. Such living arrangements do exist, so I have to judge accurate, albeit begrudgingly because this is hardly the typical experience. One of the apps discussed in this scene is a “nipple tracker” which is certainly an exaggeration but a solid crack at some of the unfortunate and sexist applications that really do gain traction, such as this one.

Later we see the Hooli bus, (Hooli being the fictitious tech giant looming large in our story). The fact that tech companies arrange bus services for their employees is accurate but the bus itself is an exaggeration, as these companies don’t build and maintain fleets of corporate buses covered in corporate logos with televisions running corporate propaganda, but simply contract bus and shuttle routes from other companies.

Next, our protagonist encounters the brogrammers, who vacillate between exaggeration and gibberish.


Anti-Buzz: XP Retired

I’m going to keep it simple and practical this week. Last time I mentioned in passing that Windows XP support has ended. I thought I would give a run down of what this might mean for you and others. In anti-buzz fashion, I’ll take the position that I’m trying to talk you down from your tree, though I understand that the typical reader of this column is tech-savvy enough to avoid any real hysteria, but for the sake of argument …

Myth #1: Windows XP no longer works!

Which is of course not true. For the most part, any system still using XP will continue to operate as normal. What you might not expect is that this is a problem in and of itself …

Myth #2: Wait, this isn’t a problem, who even uses XP anymore?

Maybe you don’t use XP anymore, but somebody you know does, maybe your ATM, maybe the mechanic who just changed your oil, (did you pay with your credit card?), maybe your doctor, (*ahem* or dentist). Windows XP was a very successful operating system and like it or not it is part of the world’s cyber infrastructure. You don’t get to ignore the risks of unsupported XP just because you don’t personally use it any more.

In some ways, it would be better if every copy of Windows XP just up and died, because it would force everyone to change. It would be a PR disaster for Microsoft, and a financial burden for many others; don’t forget that the tech landscape in other parts of the world looks very different than what you might be used to, and XP is the low-cost norm in many countries.

Myth #3: Oh no, Windows XP is a huge security risk!

I don’t want to understate the potential security risks of an old, unsupported OS, but I also don’t want to overstate them. The truth is, nobody knows what the risks are or will be. As I pointed out a long time ago, security can be a matter of popularity. Macs at one point enjoyed greater “security” simply because they were less popular, and thus it made less sense for hackers to target the system. The risks associated with XP will be influenced by what segments of the population hold out on it, and for how long. Right now, there is enough XP-infrastructure that the risks could be very real. From the other side, the article I linked to last week was about a third party trying to take on the mantle of maintaining XP as a safe system. The effectiveness and affordability of such services will have a large impact on how XP’s retirement years play out.

Myth #4: Windows XP is still a great system!

It doesn’t matter. What matters is the availability of software, and it will be increasingly hard to find modern software for the system, which is the biggest practical reason to stop using it if you haven’t already, (security reasons, as I’ve outlined, can be sort of nebulous). One market that can hold out against this is XP-as-machine situations, (like an ATM), where the existing system is basically “done” and has no pressing need for an update. Additionally, many developing markets are saturated by XP and it is likely that these markets will still see new software come off the line.

I think XP’s future might look a little like the Y2K scare, as odd as that sounds. Forget the details of the actual Y2K “bug”, the real problem was that a large portion of our computing infrastructure was laid on top of old systems with a common flaw. If XP sticks around long enough, we might see a similar situation develop where some critical flaw is exposed and has a large impact on our day-to-day, and it might take this “scare” to provide the final push to move on from this older system.


Anti-Buzz: Data Smog

In case you missed it, Anti Buzz from April 2013:

I’m not a social scientist, but I play one on the Internet. And so does everyone else. Which is sort of the problem with the Internet. If you’ve read my articles even a little bit, you know I’m pretty much a wild optimist about the positive impact computing technology has had and will have on humanity, but the inevitable fallout of everybody being able to talk to everybody is that everybody talks to everybody and meaningful interaction and information becomes harder to find. Most any technology comes with some added trouble. The automobile is a similarly revolutionary technology, and yet we also don’t question that it brings with it pollution and safety concerns, neither of which invalidate the net benefits of the technology, but you are a fool to think there is no cost involved.

And so the invention of continuous entertainment has its costs too. I’m not enough of an expert to speak to the genuine psychological effects of so-called “information overload,” (but ostensibly there are some), but we all seem to have this intuitive agreement that it can be nice to “unplug” sometimes. Also, regular Internet users are very aware of how, despite the speed and efficiency of obtaining information, one can easily get stuck in an hours-long adventure to nowhere, the Internet equivalent of just vegetating in front of the TV. While I am prone to use the “information through a straw” metaphor when discussing television, having greater control of what content we can ingest doesn’t actually stop us from binging on superfluous content.

One phrase for all this unhelpful information fluff is data smog, as coined in David Shenk’s book of the same name. Shenk was a healthy skeptic in 1997, aptly foretelling our current “unplug” impulse, but by his own admission there were many things he did not foresee that have reduced the amount of pollution in our information intake. He wrote a good short read outlining what he got right and wrong, and what technologies and institutions have helped make our online interactions more useful and meaningful.

There are a lot of articulate people out there, and blogging culture has democratized journalism to some extent, but we are sometimes left with the Internet feeling like one big argument. A severe cynic, gazing upon the beginning of the information age, would fear that people would prove too easy to manipulate, that they would lack the skills to think critically about the information they were given. While the same cynic might have criticized older media institutions as well, they likely would have made an appeal to the implied authority of any information that is able to make it through the hedge maze that is traditional media. They wouldn’t have trusted the proverbial average person to make smart decisions about what they read and heard.

With automobiles came advances in automobile safety, and so too have we been spared from “the cacophony” as one friend of mine used to worry. With concern over how well we spend our time and how well we use our brains comes more optimism in the technology and the humans who use it. Yes, we all know the sea of user-generated content can get silly, even alienating sometimes, but over the last decade people have proven rather savvy at sniffing out the garbage, and sifting out the gold. It’s actually quite remarkable how humanity, in aggregate, can be so honest most of the time. The cynic is wrong.

The severe optimist would take an anti-establishment view, using rhetoric not unlike my own, criticizing the practice of hoarding away knowledge behind a privileged media class, and predicting a bright future full of truth and clarity. This isn’t true either. A softened version of the cynic’s view is that people are more capable than ever of remaining insular. It’s a double-edged sword, on one hand it is excellent that people can associate by interests instead of simple geography, but on the other the fracturing of society can lead to a poor “ideological gene pool” if you catch my meaning. People never have their views challenged, and it becomes easier and easier for groups to resist change.

The statistician in me wants to point to confirmation bias, (the best joke about confirmation bias: “Once I learned about confirmation bias, I began to see it everywhere.”) People are typically biased to make observations that confirm what they already believe, and not to seek out or notice things that don’t, creating information that is skewed in favor of prior beliefs. This is a real problem in statistical and scientific research, but it is also something we can casually observe in ourselves. The true data smog is our tendency to just walk the same road over and over again.
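The skew is easy to simulate. A minimal sketch, with made-up numbers: a fair coin observed by someone who believes heads come up more often, and who therefore notices every heads but only remembers about half of the tails:

```python
import random

random.seed(1)

# The true process: a fair coin, 50% heads.
flips = [random.random() < 0.5 for _ in range(10_000)]
honest_rate = sum(flips) / len(flips)

# The biased observer keeps every heads but only about half of the tails.
remembered = [f for f in flips if f or random.random() < 0.5]
biased_rate = sum(remembered) / len(remembered)

# honest_rate sits near 0.5, while biased_rate climbs toward 2/3,
# "confirming" the belief without a single fabricated data point.
```

No individual observation here is false; the bias lives entirely in which observations survive, which is exactly why it is so hard to notice in ourselves.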

So apart from the standard “unplug” advice of turning off your phone, keeping emails brief, and avoiding extended web surfing, my practical advice toward a more useful online experience is to break away from your comfort zone. The danger of information overload isn’t only that you can’t filter, it’s that you can also filter too much. You can be so overwhelmed by all the information that you can forget that, hey, you have all the information. Read about something new.  Make new friends, or remember your old ones. Challenge yourself.