I was reading A Picture of the Multicore Crisis, and got to thinking about something which has bothered me for a long time. The issue is related to Moore’s Law and the growth of processing capacity (whether through raw clock speed, the multicore approach, or magic and hamsters). Looking back over the last 10 years or so, we probably have something like 10-20 times the processing power we had then.
As a producer of server-side software and a user of server software, I have to wonder: why are my servers (document management, document production, and many others) not providing a corresponding increase in throughput? Why do many server systems maintain the same performance over time, or offer only marginal improvements?
(I leave aside client-side performance for now, because on the client much of the improvement has shown up in other ways, such as new capabilities like multimedia, prettier graphics in the UI, and the ability to multitask with 10 different applications open at the same time.)
So, why are my servers not 10 times as fast as they were? I can think of a few reasons:
- As has been discussed elsewhere, the shift from clock-speed-driven improvements to a multicore approach has had an impact. Much software, especially older software, is not written in a way that takes advantage of multiple processors. Re-engineering that software to use multiple processors well is non-trivial, especially when you also have to worry about backwards compatibility, supporting a large number of customers, and finding time to add the new features product management wants. Very few of us can afford to divert a significant portion of our development resources for an extended period, and it is frequently hard to justify from a business perspective.
- Even if your software is architected for multiple processors, often the algorithm is inherently “single threaded” in places, which throttles the whole process (see the first sketch after this list).
- Also, even when you are well architected for multiple processors, the parallelism does not come for free. The overhead of coordinating the work (splitting it up, queueing tasks, merging results, locking shared state) can easily consume a non-trivial portion of your processor gains (see the second sketch after this list).
- Even excluding the shift to multicore, much software has not kept pace with the performance improvements provided by pure clock speed. There are a number of reasons for this:
- We are frequently very feature driven. The desire to compete, expand and grow often leads us to add features to existing software at an alarming rate. While this is necessary from a business perspective, the addition of these new features often slows the software down faster than the hardware speeds it up. This is why I think it is very important to architect software so as to isolate “core” processing from “features”; that way, features can be removed from the configuration when not needed, rather than being allowed to impede performance (see the pipeline sketch after this list). It is also why it is important, in each cycle of development on a product, to verify that performance on the same hardware is at least as good as in the previous release.
- Processing power is not the whole story (yeah, I know, we all know this). Much of our software is not entirely CPU bound; the bottlenecks are often elsewhere. Much of our processing, especially for large documents, is bound more by memory, disk speed, network speed, and dependencies on other systems. Given that, there is only a limited amount of benefit to be gained from pure processor speed (the last sketch after this list shows a simple way to check where the time actually goes).
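To put a number on the “single threaded in places” point: Amdahl’s Law says that if a fraction s of the work is inherently serial, the best possible speedup on n cores is 1 / (s + (1 − s) / n). Here is a minimal sketch of that arithmetic in Java; the 10% serial fraction is an illustrative assumption, not a measurement of any particular product.

```java
// Amdahl's Law: if a fraction s of the work is inherently serial,
// the best possible speedup on n cores is 1 / (s + (1 - s) / n).
public class Amdahl {
    static double speedup(double serialFraction, int cores) {
        return 1.0 / (serialFraction + (1.0 - serialFraction) / cores);
    }

    public static void main(String[] args) {
        // An assumed 10% serial portion caps 8 cores at roughly 4.7x
        // and 16 cores at roughly 6.4x.
        for (int cores : new int[] {2, 4, 8, 16}) {
            System.out.printf("%2d cores -> %.1fx speedup (10%% serial)%n",
                    cores, speedup(0.10, cores));
        }
    }
}
```

Even a modest serial fraction flattens the curve quickly, which is part of why adding cores to an existing product so often disappoints.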
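On the coordination overhead: even a clean parallel design pays for splitting the work, queueing tasks, and merging results. This is a hypothetical sketch (the chunk count and processChunk are placeholders, not code from any real product) of where that cost shows up in a typical thread-pool structure.

```java
// Sketch of the coordination a parallel document-processing loop needs:
// splitting work, queueing tasks, and merging results all cost cycles
// that the single-threaded version never paid.
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ParallelOverheadSketch {
    public static void main(String[] args) throws Exception {
        ExecutorService pool =
                Executors.newFixedThreadPool(Runtime.getRuntime().availableProcessors());

        List<Future<Integer>> results = new ArrayList<>();
        for (int chunk = 0; chunk < 1000; chunk++) {
            final int c = chunk;
            // Each submit() allocates a task and pushes it through a shared
            // queue -- pure overhead relative to a plain sequential loop.
            results.add(pool.submit(() -> processChunk(c)));
        }

        int total = 0;
        for (Future<Integer> f : results) {
            total += f.get();   // merging results serializes the end of the job
        }
        pool.shutdown();
        System.out.println("processed " + total + " chunks");
    }

    // Stand-in for real per-chunk work (e.g. rendering one page of a document).
    static int processChunk(int chunk) {
        return 1;
    }
}
```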
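On isolating “core” processing from “features”: one way to do it is to treat each optional feature as a pipeline stage that is only wired in when its configuration flag is enabled, so a disabled feature costs nothing on the hot path. The stage names and flags below are hypothetical; this is a sketch of the idea, not the architecture of any particular product.

```java
// Optional features as configurable pipeline stages: a disabled feature
// never joins the processing loop, so it cannot slow the core path down.
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.function.UnaryOperator;

public class DocumentPipeline {
    private final List<UnaryOperator<String>> stages = new ArrayList<>();

    // Only stages whose feature flag is enabled ever enter the hot path.
    DocumentPipeline addStage(Map<String, Boolean> config, String flag,
                              UnaryOperator<String> stage) {
        if (config.getOrDefault(flag, false)) {
            stages.add(stage);
        }
        return this;
    }

    String process(String document) {
        for (UnaryOperator<String> stage : stages) {
            document = stage.apply(document);
        }
        return document;
    }

    public static void main(String[] args) {
        Map<String, Boolean> config = Map.of(
                "core.convert", true,        // core processing: always on
                "feature.watermark", false,  // optional features: off unless needed
                "feature.audit", false);

        DocumentPipeline pipeline = new DocumentPipeline()
                .addStage(config, "core.convert", doc -> doc.toUpperCase())
                .addStage(config, "feature.watermark", doc -> doc + " [watermark]")
                .addStage(config, "feature.audit", doc -> doc + " [audited]");

        System.out.println(pipeline.process("quarterly report"));
    }
}
```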
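Finally, on CPU not being the whole story: before assuming more clock speed or more cores will help, it is worth timing the I/O and CPU portions of a job separately. A rough sketch follows; the file name and the trivial checksum loop are placeholders for real document processing.

```java
// Rough check of whether a job is CPU bound at all: time the I/O and the
// CPU portions separately and see which one dominates.
import java.nio.file.Files;
import java.nio.file.Path;

public class WhereDoesTheTimeGo {
    public static void main(String[] args) throws Exception {
        Path input = Path.of("large-document.xml");   // hypothetical input file

        long t0 = System.nanoTime();
        byte[] bytes = Files.readAllBytes(input);      // disk and memory bound
        long t1 = System.nanoTime();

        int checksum = 0;
        for (byte b : bytes) {                         // CPU-bound stand-in
            checksum += b;
        }
        long t2 = System.nanoTime();

        System.out.printf("read: %d ms, process: %d ms (checksum %d)%n",
                (t1 - t0) / 1_000_000, (t2 - t1) / 1_000_000, checksum);
        // If the read dominates, a faster CPU (or more cores) changes little.
    }
}
```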
It’s the best post from you, thanks a lot.
Disk access needs improvement; Raptors are very good now… waiting for real SSDs.
Fred, on the server side you may be a victim of disk access times. The ability to move bytes from disk to main memory to caches and finally to the CPU has not multiplied at Moore’s Law rates. Take disks: the poor things would have had to increase RPMs at the same rate clock speeds have gone up. My 10K Raptors are nice, but they should be doing a couple of hundred K!
Best,
BW