Second Guessing Second Life: Branding Strategy Insider – it is nice to see that I am not the only person who is not terribly impressed with Second Life (see my previous post on the subject)
I was reading this interesting post Coté’s Excellent Description of the Microsoft Web Rift « SmoothSpan Blog, as well as the post to which it refers. It is an interesting discussion of the fears many have with respect to choosing to work with Microsoft technologies versus non-Microsoft. The chain is worth a read, whether you agree with the ideas or not.
One statement I found particularly interesting was
This thing he calls “lock-in fear” and the extreme polarization (encouraged by Microsoft’s rhetoric, tactics, and track record) that you’re either all-Microsoft or no-Microsoft is my “web rift”.
While I would not disagree that Microsoft strongly encourages the use of its tools and technologies (after all, that is what most companies do, isn’t it?), I see far more rhetoric and tactical positioning on the part of the non-Microsoft, anti-Microsoft, and Open Source communities insisting that you must be 100% non-Microsoft (and preferably not even play nice with anything Microsoft), or you are obviously a Microsoft fanboy.
I guess the point I am making is that a large part of the “lock-in fear” is created not by Microsoft’s behaviour, but by the behaviours of the anti-Microsoft crowd.
This is a very amusing analogy, since it was the “free market economy” which created Microsoft’s success, and continues to sustain them. They are not being propped up artificially through government subsidies or bailouts, as so many companies in other industries seem to be. They are not trying to force governments or the courts to force their competitors to give up proprietary information or abandon markets to make it easier to compete.
In reality, it is the open source community, the “capitalism is evil” crowd, and those lobbying to take Microsoft down legislatively or litigiously who more resemble socialists/communists – “all intellectual property belongs to everyone”, “the government should intervene to level the playing field”, and other such crap.
The reality is, if you truly believe in the world of “free markets and open ideas”, then you believe that better ideas, smarter people, and better business models will ultimately prevail. This is the world in which Microsoft has played successfully for 20+ years. It is this model by which others can ultimately defeat Microsoft. It is Microsoft’s competition which seems unable to live within this model.
This is an interesting post, and fits in well with other things which have been on my mind lately, and with things about which I have posted.
It occurs to me that over the years, I really have let the world steal my dreams. I think we all do this – we get so wrapped up in the day-to-day “operations” of life that we lose track of the grand visions. We also tend to be told that we need to think realistically, and be reasonable, and play it safe. We spend much of our lives being taught what is possible, and even worse, what is impossible. I think that is why so much advancement in science, arts, and other fields comes from the young, because they have not yet learned that what they are trying to do is “impossible”.
One of the nice things about a grand vision is that you spend much less time worrying about whether it is possible or not, and more time just working towards it.
I was reading A Picture of the Multicore Crisis, and got to thinking of something which has bothered me for a long time. This issue is related to Moore’s Law and the growth of processing capacity (whether through raw clock speed, or the multicore approach, or magic and hamsters). Looking at the last 10 years or so, we probably have something like 10-20 times the processing power we had 10 years ago.
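As a rough sanity check on that 10-20x figure, here is a little arithmetic sketch. It assumes processing power doubles at a fixed interval, Moore’s-Law style; the doubling periods used below are illustrative assumptions, not measured values.

```python
def growth_over(years, doubling_months):
    """Multiplicative growth factor after `years`, at one doubling per `doubling_months`."""
    return 2 ** (years * 12 / doubling_months)

# Slower doubling periods than the classic 18-24 months land right in the 10-20x range:
for months in (24, 30, 36):
    print(f"doubling every {months} months -> {growth_over(10, months):.1f}x in 10 years")
```

With doubling every 36 months you get roughly 10x over a decade, every 30 months roughly 16x, so the 10-20x estimate corresponds to a somewhat slower pace than the classic formulation of Moore’s Law.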
As a producer of server-side software, a user of server software, etc., it makes me wonder – why are my servers (document management, document production, and many others) not providing a corresponding increase in throughput? Why do many server systems maintain the same performance over time, or offer only marginal improvements?
(I leave aside client side performance for now, because on the client side much of the performance improvements have shown up in different ways, such as new capabilities like multimedia, prettier graphics in the UI, the ability to multitask and keep 10 different applications open at the same time).
So, why are my servers not 10 times as fast as they were? I can think of a few reasons:
- As has been discussed in other places, the shift from clock-speed-driven improvements to a multicore approach has had an impact. Much software, especially older software, is not written in a way which takes advantage of multiple processors. And often, re-engineering this software to better use multiple processors is non-trivial, especially when you have to worry about things like backwards compatibility, supporting a large number of customers, and finding time to add the new features product management wants. Very few of us can afford to divert a significant group of our development resources for an extended period of time, and it is frequently hard to justify from a business perspective.
- Even if your software is architected for multiple processors, often the algorithm is inherently “single threaded” in places, which throttles the whole process.
- Also, even if you are well architected for multiple processors, this does not come for free. The overhead introduced in coordinating the parallel work can easily consume a non-trivial portion of your processor gains.
- Even excluding the shift to multicore, much software has not kept up with performance improvement provided through pure clock speed. There are a number of reasons for this:
- We are frequently very feature driven. The desire to compete, expand and grow often leads us to add features to existing software at an alarming rate. While this is necessary from a business perspective, often the addition of these new features slows down the software faster than the hardware speeds it up. Note, this is why I think it is very important to architect software so as to be able to isolate “core” processing from “features”. This way, features can be removed from the configuration when not needed, and not allowed to impede performance. Also, this is why it is important in each cycle of development on a product to assess whether performance on the same hardware is at least as good as in the previous release.
- Processing power is not the whole story (yeah, I know, we all know this). Much of our software is not entirely CPU bound. The bottlenecks are often elsewhere. Much of our processing, especially for large documents, is more bound by memory, disk speed, network speed, and dependencies on other systems. Given that, there is only a limited amount of benefit to be gained through pure processor speed.
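The throttling effect of the “single threaded” portions, and the cost of coordinating parallel work, can be put in rough numbers with Amdahl’s Law. This is a toy model, not a measurement; the serial fraction and per-core overhead figures below are assumptions chosen purely for illustration.

```python
def speedup(cores, serial_fraction, overhead_per_core=0.0):
    """Amdahl's-Law speedup, with a simple linear term for coordination overhead.

    serial_fraction:   portion of the work that cannot be parallelized.
    overhead_per_core: extra work (as a fraction of the original runtime)
                       added per core used, modeling locking/coordination costs.
    """
    parallel = 1.0 - serial_fraction
    time = serial_fraction + parallel / cores + overhead_per_core * cores
    return 1.0 / time

# Even a 10% serial portion caps an 8-core machine well below 8x:
print(round(speedup(8, 0.10), 2))        # 4.71
# Add 1% coordination overhead per core, and the gain shrinks further:
print(round(speedup(8, 0.10, 0.01), 2))  # 3.42
```

Note that in the second case, adding still more cores eventually makes things worse, since the overhead term keeps growing while the parallel term has little left to give.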
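The same kind of arithmetic shows why a faster CPU helps so little when the bottleneck is elsewhere. The 2-second CPU portion and 8-second disk/network portion below are made-up figures for illustration only.

```python
def new_runtime(cpu_time, io_time, cpu_speedup):
    """Runtime after speeding up only the CPU-bound portion of a job."""
    return cpu_time / cpu_speedup + io_time

# A job spending 2s on CPU and 8s waiting on disk/network:
base = 2.0 + 8.0
faster = new_runtime(2.0, 8.0, 10.0)
print(f"{base / faster:.2f}x overall from a 10x faster CPU")  # 1.22x
```

A 10x faster processor buys barely a 22% improvement here, which matches the experience of large-document processing being bound by memory, disk, network, and dependent systems rather than raw CPU.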