I recently read an article by Herb Sutter that claims that the long rise in processor speed is finally coming to an end. I certainly believe that this is going to happen eventually, maybe within the next decade, because we do seem to be approaching some physical limits; I didn’t think that it was happening quite yet, though. Sutter does present some interesting evidence in favor of his argument: in particular, there’s a graph showing that the clock speed of Intel’s processors actually stopped increasing two years ago, and that, if the trends from before 2001 had continued, we’d now have CPUs approaching 10GHz instead of less than 4GHz.

And it’s true, Intel’s march toward higher CPU speeds has stalled. The thing is, though, I’m not sure how much weight to give to that argument. My understanding is that, with the Pentium 4, Intel decided that people paid more attention to clock speed than to other metrics of CPU performance, so they pushed chips’ clock speeds even if, say, it sometimes took more clock cycles to carry out the same work. (Which is why AMD started marketing their chips with model numbers that translate their performance into Intel’s terms instead of touting their own clock rates.) Given a choice between, say, a 2GHz Opteron and a 3GHz Pentium 4, I know which one I would take. So maybe Intel was playing tricks that are catching up with them now; I’d like to see graphs like that from other manufacturers.
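Just to make that tradeoff concrete, here’s a back-of-the-envelope comparison; the instructions-per-cycle figures below are made up purely for illustration, not measurements of either chip.

    // Toy comparison of clock speed versus work done per cycle.
    // The IPC numbers are hypothetical, chosen only to illustrate the point.
    public class ClockVsIpc {
        public static void main(String[] args) {
            double opteronGHz = 2.0, opteronIpc = 1.5;    // assumed
            double pentium4GHz = 3.0, pentium4Ipc = 0.9;  // assumed
            System.out.printf("Opteron:   %.1f billion instructions/sec%n",
                              opteronGHz * opteronIpc);
            System.out.printf("Pentium 4: %.1f billion instructions/sec%n",
                              pentium4GHz * pentium4Ipc);
        }
    }

With numbers like those, the lower-clocked chip wins; the point is just that clock speed alone doesn’t tell you how much work gets done.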

And if you look at the Intel graph in that article, the current plateau isn’t the only change in behavior – around 1995, the rate of clock speed increase actually increased. If you extend the older line instead of the newer one, then Intel’s current clock speeds don’t look at all out of line. And it does seem that other manufacturers will be hitting 4GHz soon – for example, the recent press releases about IBM/Sony/Toshiba’s Cell processor claim that it will reach that mark. (Admittedly, I’m not sure when it will be released, or how long after release it will take for 4GHz models to appear.)
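As a rough illustration of how much the choice of trend line matters, here’s a toy extrapolation; the starting point and the two growth rates are assumptions I picked to roughly match the two regimes, not numbers read off Sutter’s graph.

    // Compound-growth extrapolation of clock speed from 2001 to 2005,
    // using two assumed annual growth rates (purely illustrative).
    public class ClockTrend {
        public static void main(String[] args) {
            double startGHz = 1.5;   // roughly where clocks stood in 2001 (assumed)
            double fastRate = 1.60;  // ~60%/year, like the post-1995 line (assumed)
            double slowRate = 1.25;  // ~25%/year, like the pre-1995 line (assumed)
            int years = 4;
            System.out.printf("Fast trend by 2005: %.1f GHz%n",
                              startGHz * Math.pow(fastRate, years));
            System.out.printf("Slow trend by 2005: %.1f GHz%n",
                              startGHz * Math.pow(slowRate, years));
        }
    }

Under those made-up rates, the fast line lands near 10GHz and the slow line under 4GHz, which is the shape of the discrepancy Sutter is pointing at.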

Still, I do buy the larger point of the article, that to continue to get increased performance, we’ll soon need to switch to other techniques, of which the most interesting is multithreaded code on multicore processors. As a loyal Sun employee, I have to get behind this: my group at Sun is eagerly awaiting the release of dual-core Opteron processors, and Sun’s forthcoming Niagara SPARC processor is going to be a lot of fun to work with. I hope that, one of these years, I have an excuse to program in a multicore environment; my current software team does multithreaded programming, but we do it in a fairly naive way. (And there’s nothing wrong with that, to be sure: simple solutions are better, as long as they get the job done.)

Programs are already marching less in lockstep and acting more like a collection of semi-autonomous agents; how far can we take this? Is the number of processors in a computer going to grow exponentially? Are the processors going to get simpler and less powerful while this happens, or is each individual processor going to support as complex a piece of software as those on today’s single processors? If the latter, it’s going to be very exciting to see what complex dances the software on these processors traces out, and what unexpected phenomena arise from that.
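For what it’s worth, here’s a minimal sketch of what the “collection of semi-autonomous agents” style can look like, using java.util.concurrent thread pools; the task body is a placeholder, and this isn’t meant to describe what my team’s code actually does.

    import java.util.concurrent.*;

    // A pool of independent tasks, one per available core; each task runs
    // on its own and reports back when it finishes. The work inside
    // eachTask() is a stand-in for whatever an agent would actually do.
    public class AgentPool {
        public static void main(String[] args) throws Exception {
            int cores = Runtime.getRuntime().availableProcessors();
            ExecutorService pool = Executors.newFixedThreadPool(cores);
            CompletionService<Long> results = new ExecutorCompletionService<>(pool);

            for (int i = 0; i < cores; i++) {
                final int id = i;
                results.submit(() -> eachTask(id));
            }
            for (int i = 0; i < cores; i++) {
                System.out.println("agent finished: " + results.take().get());
            }
            pool.shutdown();
        }

        // Placeholder work: just burn some cycles and return a number.
        static long eachTask(int id) {
            long sum = 0;
            for (long j = 0; j < 10_000_000L; j++) {
                sum += j ^ id;
            }
            return sum;
        }
    }

The interesting questions start once the agents have to coordinate: shared state, communication, load balancing. That’s exactly where the naive approaches stop being enough.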

Down with authoritarianism in software design; long live anarchist collectives!
