New laptop & Another try at Ubuntu

Well, as I discussed in a previous post, I have been in the market for a new laptop. I have finally bought one. I decided to go for a Dell XPS rather than an Apple (mostly due to cost). Such is life – maybe I will try a Mac next year. My intent for the new laptop is either to dual boot Vista and Ubuntu, or (if I have a good enough experience with Ubuntu) to run just Ubuntu and do all of my Windows stuff in hosted virtual machines.

So, last night I took my brand new laptop and my newly burned Ubuntu CD, and set out. Ubuntu booted from the CD just fine, but the screen resolution sucked because Ubuntu is philosophically opposed to loading the drivers for my video card. No big deal – I could live with 800×600 until I got a proper install done. So, I clicked on the install icon, and away I went. Or, actually, I didn’t. It seems the installer UI is not expecting 800×600 resolution, and the buttons that would let me proceed through the installation were lost off the bottom of the screen. Nor did I seem to be allowed to resize the window. It being midnight and all, I gave up. I am sure there is some way around this, but I did not feel like screwing with it.

I will probably have another shot at setting up Ubuntu or some other Linux distro this weekend. Maybe I will have better luck and not just give up on Linux (sorry, folks – this is stuff that should just work!).

PS – Vista is working fine on my new laptop. I transferred my files and settings from my old machine using “Windows Easy Transfer” – not a problem.

Fred’s Laws – How not to write software

This begins a series of posts on Fred’s Laws – basically a set of anti-rules on how not to develop software.

Over the past twenty-odd years, I have seen a lot of software projects crash and burn. Many were doomed from the start, while many others died slow, painful deaths after hopeful beginnings. Some finished, and their systems went into production, without anyone ever realizing that the project was a failure. Others should have failed, but managed to struggle through due to the heroic efforts of one or more dedicated (and usually really smart) people.

I have also seen more than a few “failed” projects that were technical successes. We built really cool software. We were on time, on budget, and had good quality. They failed in some other aspect – usually they were business failures for one reason or another.

The environments in which these projects have died have been varied as well. Some tried to make it with no process at all. Some had lots and lots and lots (and lots and lots) of process. I have not seen a great deal of correlation between process and success (well, except that the process I pick for my projects is always successful 😉 ).

When I look back on these catastrophic projects, usually I can see where things went wrong. In fact, most of the time I could see where they were going wrong while it was happening, like watching a car crash in slow motion, but was frequently powerless to avoid the impact. More often than not (in fact, I would be willing to say always), the root cause was something completely avoidable (from a technical or project perspective). Never was it because we chose Windows over Linux (or vice versa), nor because of the programming language we chose, nor because what we set out to do was technically impossible.

As I have written Fred’s Laws (well, written them in my head – none of them are actually written yet!), it occurs to me that they all seem to come straight from the department of the bloody obvious. No rocket science here. If they are this obvious, why even write them down? Well, the reason is that, despite how really obvious all of this is, I watch projects fail to follow them all the time. Most of the time, in fact.

So, stay tuned. I am going to try to post one law per day (or so) until I run out of ideas.

BTW, as a little footnote, I have been involved in a few successful projects along the way. It just always seems to be the ones that failed (and failed spectacularly) that stick out in my memory.

Where am I?

I just realized that it has been almost two weeks since I posted anything substantive here. Once again, life gets in the way of blogging. I have also been in a fairly negative place with respect to the whole software/high-tech industry, evaluating what part I want to play in it, so anything I write tends to wander off into a rant.

I have a couple of longer, multi-part posts I have been working on, and hope to get something out this weekend.

Is Linux Really Ready for Simple Users?

There is a good series of articles over on desktoplinux.com: Is Linux Really Ready for Simple Users? (Part 1 of 8). Whether you agree or disagree with some of the details of his analyses, it is good to see someone take an analytical look, rather than the usual ranting and raving of “Linux is great because Microsoft is evil”.

A Picture of the Multicore Crisis -> Moore’s Law and Software

I was reading A Picture of the Multicore Crisis, and got to thinking about something which has bothered me for a long time. The issue relates to Moore’s Law and the growth of processing capacity (whether through raw clock speed, the multicore approach, or magic and hamsters). Over the last 10 years or so, we have gained something like 10-20 times the processing power we had before.

As a producer of server-side software, and a user of server software more generally, I have to wonder: why are my servers (document management, document production, and many others) not providing a corresponding increase in throughput? Why do many server systems deliver the same performance over time, or only marginal improvements?

(I leave aside client-side performance for now, because on the client much of the improvement has shown up in different ways: new capabilities like multimedia, prettier graphics in the UI, and the ability to multitask with 10 different applications open at the same time.)

So, why are my servers not 10 times as fast as they were? I can think of a few reasons:

  1. As has been discussed elsewhere, the shift from clock-speed-driven improvements to a multicore approach has had an impact. Much software, especially older software, is not written in a way that takes advantage of multiple processors. And re-engineering this software to make better use of multiple processors is often non-trivial (see the first sketch after this list), especially when you also have to worry about backwards compatibility, supporting a large number of customers, finding time to add the new features product management wants, etc. Very few of us can afford to divert a significant group of our development resources for an extended period of time, and it is frequently hard to justify from a business perspective.
  2. Even if your software is architected for multiple processors, often the algorithm is inherently “single threaded” in places, which throttles the whole process (see the second sketch after this list).
  3. Also, even if you are well architected for multiple processors, this does not come for free. The overhead introduced in coordinating the parallel work can easily consume a non-trivial portion of your processor gains.
  4. Even excluding the shift to multicore, much software has not kept up with the performance improvements provided by pure clock speed. There are a number of reasons for this:
    • We are frequently very feature driven. The desire to compete, expand and grow often leads us to add features to existing software at an alarming rate. While this is necessary from a business perspective, often the addition of these new features slows down the software faster than the hardware speeds it up. Note, this is why I think it is very important to architect software so as to be able to isolate “core” processing from “features”. This way, features can be removed from the configuration when not needed, and not allowed to impede performance. It is also why it is important, in each cycle of development on a product, to verify that performance on the same hardware is at least as good as in the previous release.
    • Processing power is not the whole story (yeah, I know, we all know this). Much of our software is not entirely CPU bound. The bottlenecks are often elsewhere. Much of our processing, especially for large documents, is more bound by memory, disk speed, network speed, and dependencies on other systems. Given that, there is only a limited amount of benefit to be gained through pure processor speed.
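To make point 1 concrete, here is a minimal sketch of the kind of re-engineering involved: turning a sequential per-document loop into a fixed-size thread pool. This is purely my illustration – the class and method names are hypothetical, and it assumes the documents can be processed independently. Real systems also have to deal with shared state, ordering, and error handling, which is exactly where the non-trivial work lies.

```java
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class ParallelBatch {
    // Stand-in for the real per-document work (parse, transform, render).
    static void process(String document) {
        System.out.println("processed " + document);
    }

    public static void main(String[] args) throws InterruptedException {
        List<String> documents = List.of("a.xml", "b.xml", "c.xml");

        // Old shape: one document at a time, one core busy.
        // for (String d : documents) process(d);

        // New shape: spread independent documents across all available cores.
        int cores = Runtime.getRuntime().availableProcessors();
        ExecutorService pool = Executors.newFixedThreadPool(cores);
        for (String d : documents) {
            pool.submit(() -> process(d));
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.HOURS);
    }
}
```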
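Point 2 is essentially Amdahl’s Law (my label for it): if a fraction s of the work is inherently serial, then n processors can never deliver more than 1/(s + (1-s)/n) speedup, no matter how well the parallel part scales. A quick back-of-envelope calculation, using a purely illustrative 10% serial fraction:

```java
public class Amdahl {
    // Best-case speedup when a fraction "serial" of the work is sequential.
    static double speedup(double serial, int cores) {
        return 1.0 / (serial + (1.0 - serial) / cores);
    }

    public static void main(String[] args) {
        for (int n : new int[] {1, 2, 4, 8, 16}) {
            System.out.printf("%2d cores -> %.2fx speedup%n", n, speedup(0.10, n));
        }
    }
}
```

With just 10% of the work serial, 8 cores top out around 4.7x and 16 cores around 6.4x – nowhere near the 10-20x raw capacity growth mentioned above, and the coordination overhead from point 3 eats into even that.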

PC World – ISO Rejects Microsoft’s OOXML as Standard

ISO Rejects Microsoft’s OOXML as Standard

The title is somewhat misleading – OOXML was not rejected as a standard; rather, the attempt to fast-track its approval failed. This is a good thing. While it is a setback for Microsoft, it will now allow some of the comments raised against the specification to be addressed before a new vote occurs.

Unfortunately, it means we get to listen to much more of the ODF vs. OOXML, “Microsoft is evil” babble.

Such is life.

MacBook Pro versus Non-Mac Laptop

I am speculating about my next laptop purchase, and as always, I like to consider alternatives to my current environment (an Inspiron 9400, 2.0 GHz Core Duo, 2 GB of RAM, running Vista). I do not own a desktop machine, and really do not see myself going that route, unless I were to get a new iMac purely as a luxury.

I am thinking of the following options:

  1. An Apple MacBook Pro (17″, 2.4 GHz, 2 GB of RAM, high-resolution display, everything else pretty much standard)
  2. A Dell Inspiron 1720, configured pretty much the same as the MacBook (except with only a 2.2 GHz processor), with Windows Vista or XP
  3. The same Dell, but loaded with some flavour of Linux.

(Note that whichever choice I make, I would still have to run Vista and/or XP somewhere on the machine in order to co-exist with the real world.)

These options appear very similar to me. I even configured the Dell with stuff I might not care about (like a built-in webcam), in order to make the comparison “fair”.

The main difference I see is the price. I can get the Dell for about $1800 (Canadian), whereas the MacBook Pro comes in at over $3200. Is there anything about the Apple which justifies this price, beyond being “cool”? Is the price of the Mac OS really $1400? Or is there something else intangible here?

As much as I might like the MacBook alternative, I cannot really see justifying the cost.

Can anyone tell me why I (or anyone else) should let Apple overcharge us like this?