The Anti-Buzz: Not exactly …
Why: Sometimes a whole is greater than the sum of its parts, and sometimes too many cooks spoil the broth.
Multi-core CPUs are in many ways exactly what you think they are: processors with multiple cores, doing multiple things at once. To understand how this gets complicated, consider how an older, single-core processor works. With a single core, multi-tasking is in literal truth an illusion. The CPU is only doing one thing at a time, and yet you still sat there in front of your computer, using the mouse, typing on the keyboard, playing music, and browsing the web. What was going on was that the one CPU, with its billions of clock cycles per second, was spending a microsecond paying attention to the mouse, a microsecond paying attention to the keyboard, a microsecond playing music, and so on. It alternates between tasks so quickly that you (usually) don’t notice.
Now with multiple cores you can actually have one core doing one set of tasks, another doing a different set, and so forth. Each core is still rapidly switching tasks – your OS alone is full of a few dozen or more processes that need constant attention – but much of the work is well and truly happening simultaneously.
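For the programmers in the audience, the time-slicing illusion described above can be sketched in a few lines of Python. Each “program” here is a generator that yields after every tiny slice of work – a stand-in for a context switch. The task names and slice counts are purely illustrative:

```python
from collections import deque

def run_time_sliced(tasks, slices=12):
    """Toy single-core scheduler: one 'CPU' round-robins through tasks,
    doing one tiny slice of work at a time. Each task is a generator
    that yields after each slice, mimicking a context switch."""
    ready = deque(tasks)
    trace = []
    for _ in range(slices):
        if not ready:
            break
        name, task = ready.popleft()
        try:
            next(task)                   # do one slice of this task's work
            trace.append(name)
            ready.append((name, task))   # send it to the back of the line
        except StopIteration:
            pass                         # task finished; drop it
    return trace

def chore(steps):
    for _ in range(steps):
        yield

# Three "simultaneous" programs on one core: the interleaving is the illusion.
trace = run_time_sliced([("mouse", chore(4)), ("music", chore(4)), ("web", chore(4))])
```

Printed out, the trace is just mouse, music, web, mouse, music, web… – one worker alternating fast enough to look like three.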
So what’s the problem? Well, how many processors does it take to screw in a light bulb? One. Some tasks don’t get done faster with more help. Four people don’t screw in a light bulb faster than one. Now, let’s say the bulb is up high and you need a ladder. Two people can speed this up if one fetches the ladder while the other fetches the new bulb. However, if the bulb and the ladder are both in the same shed, these people might get in each other’s way. Also, when it comes time to screw in the new bulb, the second person becomes superfluous again. Either way, four people is still too many.
Now let’s say you are vacuuming the house. Clearly two people will get the job done in about half the time, and three people will get it done even quicker. Eventually you can have too many people and too many vacuums and things get crowded, but for the most part the workload can be cleanly divided and scales well.
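The light-bulb versus vacuuming contrast has a formal name: Amdahl’s law, which says the part of a job that can’t be divided caps your speedup no matter how many workers you add. A minimal Python sketch – the 0.95 “parallel fraction” for vacuuming is just an illustrative guess:

```python
def amdahl_speedup(parallel_fraction, workers):
    """Amdahl's law: if only a fraction p of the job can be split up,
    the serial remainder (1 - p) limits the overall speedup no matter
    how many workers pitch in."""
    p = parallel_fraction
    return 1.0 / ((1.0 - p) + p / workers)

# Screwing in a light bulb: essentially nothing parallelizes.
bulb = amdahl_speedup(0.0, 4)      # four people, no faster: 1.0x

# Vacuuming the house: nearly all of the work divides cleanly.
vacuum = amdahl_speedup(0.95, 4)   # four people, roughly 3.5x
```

Note that even the vacuuming case falls short of a perfect 4x – the leftover serial 5% is already costing you.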
Even better, imagine you are fetching a lot of boxes from a very high shelf. You can only carry one box at a time, and so you go up the ladder, grab a box, come down, place the box where you need it, and repeat. If somebody were helping you, you could stay at the top of the ladder and pass boxes down to them. While you grabbed the next box, they could put the previous one where it belongs. In this situation, having a second person more than doubles your efficiency – nobody is wasting time climbing the same ladder over and over.
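The ladder-and-boxes arrangement is what programmers call a producer–consumer pipeline. Here is a minimal Python sketch using a thread-safe queue as the hand-off between the two workers (the names and the box count are illustrative):

```python
import queue
import threading

def ladder_pipeline(n_boxes):
    """Two workers as a pipeline: one stays on the ladder handing boxes
    down (the producer), the other puts each box away (the consumer).
    Neither wastes time climbing; the hand-off is a thread-safe queue."""
    handoff = queue.Queue(maxsize=1)   # one pair of hands between them
    put_away = []

    def person_on_ladder():
        for box in range(n_boxes):
            handoff.put(box)           # pass the next box down
        handoff.put(None)              # signal: no more boxes

    def person_on_ground():
        while True:
            box = handoff.get()
            if box is None:
                break
            put_away.append(box)       # put the box where it belongs

    up = threading.Thread(target=person_on_ladder)
    down = threading.Thread(target=person_on_ground)
    up.start(); down.start()
    up.join(); down.join()
    return put_away

boxes = ladder_pipeline(5)
```

Because the queue is first-in, first-out and there is a single consumer, every box arrives, in order, with both workers busy the whole time.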
Now imagine that you have many people performing many tasks. Great! One person is in the kitchen peeling potatoes, throwing the peels in the trash. Another is going through the house, taking out all the trash. If these two people don’t communicate with each other, you’ll end up with potato peels on the floor. Even with communication, the potato peeler still has to stop and wait for the trash can to become available again – or they could put the peels on a cutting board instead, but that means using a resource that previously wasn’t part of our equation.
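The trash-can standoff is exactly what a lock is for in multi-threaded code: the peeler must wait whenever the other worker has carried the can off to empty it. A toy Python sketch, with the shared can modeled as a bit of state that only one worker may touch at a time (all names and counts are illustrative):

```python
import threading

def run_kitchen(peels=200, empties=20):
    """Two workers share one trash can. Holding the lock while the can
    is away guarantees the peeler never dumps peels into thin air, so
    nothing ever lands on the floor."""
    can_lock = threading.Lock()
    state = {"can_present": True, "in_can": 0, "on_floor": 0}

    def peeler():
        for _ in range(peels):
            with can_lock:                    # wait for the can to be back
                if state["can_present"]:
                    state["in_can"] += 1
                else:
                    state["on_floor"] += 1    # unreachable under the lock

    def trash_duty():
        for _ in range(empties):
            with can_lock:                    # take the can away...
                state["can_present"] = False
                state["in_can"] = 0           # ...empty it...
                state["can_present"] = True   # ...and bring it back

    t1 = threading.Thread(target=peeler)
    t2 = threading.Thread(target=trash_duty)
    t1.start(); t2.start()
    t1.join(); t2.join()
    return state["on_floor"]

peels_on_floor = run_kitchen()
```

Delete the two `with can_lock:` lines and the “unreachable” branch becomes very reachable – that race is the potato peels on the floor.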
All of the above make for pretty apt metaphors for the benefits, drawbacks, and hazards of multi-core processing. Everything depends on circumstance, and many circumstances cannot be easily predicted. Clearly more workers means more can get done, but it also creates more work in the form of coordination and communication. A lack of that effort creates idleness, as workers fall over one another. And some coordination is necessary to prevent errors (potato peels on the floor) that previously could never have happened – one person wouldn’t peel a few potatoes, take out the trash, and then put peels on the floor, but two people might.
So we’ve established that multi-core processors don’t simply improve performance by a factor of however many cores you have. Additionally, the “extra planning effort” that multiple cores require manifests itself as extra code, meaning more work is required to get the right things done. This code can exist in a number of places along the development path (the OS itself, the application you are using, or even the compiler that put them together in the first place), but the short story is that, depending on what software you are using, the appropriate planning overhead may or may not have been implemented, meaning you might only situationally enjoy the full benefits of multi-core processing.
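That planning overhead can be captured in a toy cost model: each extra worker shaves time off the divided work but adds a fixed coordination cost, so past some point more workers make the job slower. All the numbers here are made up purely for illustration:

```python
def completion_time(total_work, workers, coordination_cost=2.0):
    """Toy model: dividing the work helps, but every extra worker adds a
    fixed dose of coordination overhead (the 'extra planning' code).
    Eventually the overhead swamps the savings."""
    return total_work / workers + coordination_cost * workers

times = {w: completion_time(100.0, w) for w in (1, 2, 4, 8, 16)}
best = min(times, key=times.get)
```

In this made-up model, eight workers finish fastest, and sixteen are actually slower than eight – too many cooks, quantified.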
Now for some homework: Consider this Intel CPU.
Some of that page reads like Star Trek technobabble:
Smart Cache provides a higher-performance, more efficient cache subsystem while HD Boost accelerates overall system performance across your multimedia applications.
And some of it sounds like the hip double-speak that gets parodied in Dilbert:
Intel Turbo Boost technology maximizes speed for demanding applications, dynamically accelerating performance to match your workload – more performance when you need it the most.
I have a love/hate relationship with Intel because they make the best processors in the world, and then they over-hype them with marketing lingo. Dissecting one of their products might seem to border on libel, so before I proceed, understand that I generally favor Intel over rival AMD and believe their processors to be the superior product. This particular processor was chosen because it is a good talking point for addressing a number of buyer concerns in the CPU market.
Consider the pieces of this ad: This Intel Core i7-860 …
1) … is Quad-Core – meaning it has four cores. We’ve explained what this means already.
2) … has an 8MB L3 Cache – caching is a little hard to explain in layman’s terms. The best summary is that the cache is a tiny piece of super-duper high-performance RAM that keeps frequently used parts of memory very close to the CPU. This improves performance, but it is about as predictable as the lottery, given how new tasks typically walk all over the old ones. Caching is so tricky that you can’t even strictly say that a bigger cache is better (there is a reason the cache is so small in comparison to your RAM). I would generally consider cache stats a wash – don’t factor them into your buying decisions.
3) … has an Integrated DDR3 Memory Controller – which is good, but this basically just means that this processor is designed to work with the newest standard of RAM. It’s kind of like a washing machine boasting that it takes High-Efficiency detergent; in this price range you should expect nothing less.
4) … has an Advanced Smart Cache – there’s that “cache” word again.
5) … has Turbo Boost technology – this will take some work. What this does is change the clock speed of individual cores so that the CPU as a whole is running at its maximum power capacity at all times. The reason we have multi-cores now is that excessive power consumption was melting CPUs when we ran them too fast. Instead of making faster processors, we just made, well, more processors. The name “Turbo Boost” is a marketing gimmick, but the technology is sound. When one core has too much to do, it is sped up at the expense of the other cores. The net effect, really, is that Turbo Boost takes steps to smooth out the lack of multi-core planning that any of your software might be guilty of. I would say that this feature has “Giant Suite of Office Management Software” written all over it.
6) … has Wide Dynamic Execution – This is probably Intel’s biggest advantage over the competition right now, and ironically the least hyped of its accomplishments. The short version: Intel’s newest processors take in four instructions at a time, while AMD only gets three. This means exactly what it sounds like. Intel at 2.5 GHz gets 33% more done than AMD at 2.5 GHz. And this capacity is fully taken advantage of by …
7) … implements Hyper-Threading Technology – like “Turbo Boost,” the name of this technology is just marketing pap, but right now only Intel processors do this. The feature requires cooperation from the OS, but every major system already supports it, even Linux. Effectively, your OS pretends that you have twice as many cores as you really do, and so assigns twice as much work to each core. This technology has been around for a while, but it only really shines now because of the wide execution explained above. One gives a wide berth and the other makes sure that it gets filled to capacity. However, Intel’s claim that “Hyper-Threading technology effectively doubles the output” is an outright lie. In practice, benchmarks show that it gives you about 20% better performance, with situational variance.
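Two of the items above reward a closer look in code. First, the cache: to see why I call its benefit “about as predictable as the lottery,” here is a small LRU (least-recently-used) cache simulation in Python. The same cache is nearly perfect for one access pattern and completely useless for another – both workloads are contrived for illustration:

```python
from collections import OrderedDict

def lru_hit_rate(accesses, cache_size):
    """Simulate a tiny least-recently-used cache and report the fraction
    of accesses it serves. Whether a cache helps at all depends on the
    access pattern, not just its size."""
    cache = OrderedDict()
    hits = 0
    for addr in accesses:
        if addr in cache:
            hits += 1
            cache.move_to_end(addr)        # mark as most recently used
        else:
            cache[addr] = True
            if len(cache) > cache_size:
                cache.popitem(last=False)  # evict the least recently used
    return hits / len(accesses)

# A small "hot set" reused constantly: even a tiny cache is nearly perfect.
hot = lru_hit_rate([0, 1, 2, 3] * 100, cache_size=4)

# A repeated scan over one more item than fits: every single access misses,
# because each new item evicts exactly the one the scan wants next.
scan = lru_hit_rate(list(range(10)) * 100, cache_size=9)
```

The hot-set workload hits 99% of the time; the scan hits 0% – and making the scan’s cache slightly bigger (short of fitting everything) wouldn’t change that at all. That is why I wouldn’t buy on cache stats.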
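Second, Turbo Boost can be thought of as a shared power budget: idle cores donate their share to the busy ones, up to a cap. A toy Python sketch using the clock figures listed for this part (2.80 GHz base, 3.46 GHz maximum turbo) – and note that real power scales much less forgivingly with clock speed than this linear simplification pretends:

```python
def turbo_clock(busy_cores, total_cores=4, base_ghz=2.8, max_boost_ghz=3.46):
    """Toy model of Turbo Boost: the chip's power/heat budget is sized
    for all cores running at base clock. When some cores sit idle, the
    busy ones may spend the spare budget to clock higher, up to a cap.
    (Simplification: power is treated as proportional to total GHz.)"""
    budget = total_cores * base_ghz
    return min(budget / busy_cores, max_boost_ghz)

one_busy = turbo_clock(1)   # a single-threaded program gets the full boost
all_busy = turbo_clock(4)   # fully loaded, every core is back at base clock
```

With one core busy the model pins it at the 3.46 GHz cap; with all four busy, everyone runs at the 2.80 GHz base. That is the sense in which Turbo Boost compensates for software that never learned to share the work.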
Is this an Intel ad? Not exactly – just an honest picture of the current CPU market. AMD thrives on its lower prices, and hides behind GHz to make you think you are getting more than you are, but AMD is also an innovator, and so things will probably change. Do you need this Intel CPU? Probably not in every machine – it’s a bit pricey – but now you have the anti-buzz-word tools to make sense of the lower-priced models.