Fred’s Laws – Part 0

Well, I am off to a good start. I was not counting on the issues involved in moving to my new laptop.

All sorted out now, though. Hopefully tonight I will have Part 1 of this posted.


Fred’s Laws – How not to write software

This begins a series of posts on Fred’s Laws – basically a set of anti-rules on how not to develop software.

Over the past twenty-odd years, I have seen a lot of software projects crash and burn. Many have been doomed from the start, while many others died slow, painful deaths after hopeful beginnings. Some have finished, and the systems are in production, without anyone ever realizing that the project was a failure. Others should have failed, but managed to struggle through due to the heroic efforts of one or more dedicated (and usually really smart) people.

I have also seen more than a few “failed” projects that were technical successes. We built really cool software. We were on time, on budget, and had good quality. These projects failed in some other aspect – usually they were business failures for one reason or another.

The environments in which these projects have died have been varied as well. Some tried to make it with no process at all. Some had lots and lots and lots (and lots and lots) of process. I have not seen a great deal of correlation between process and success (well, except that the process I pick for my projects is always successful 😉 ).

When I look back on these catastrophic projects, usually I can see where things went wrong. In fact, most of the time I could see where they were going wrong while it was happening, like watching a car crash in slow motion, but was frequently powerless to avoid the impact. More often than not (in fact, I would be willing to say always), the root cause was something completely avoidable (from a technical or project perspective). Never was it because we chose Windows over Linux (or vice versa), nor because of the programming language we chose, nor because what we set out to do was technically impossible.

As I have written Fred’s Laws (well, written them in my head; none of them are actually written down yet!), it occurs to me that they all seem to be straight from the department of the bloody obvious. No rocket science here. If they are this obvious, why even write them down? Well, the reason is that, despite how really obvious all of this is, I watch projects fail to follow them all the time. Most of the time, in fact.

So, stay tuned. I am going to try to post one law per day (or so) until I run out of ideas.

BTW, as a little footnote, I have been involved in a few successful projects along the way. It just always seems to be the ones that failed (and failed spectacularly) that stick out in my memory.

Moore’s Law and Software

I was reading A Picture of the Multicore Crisis, and got to thinking about something which has bothered me for a long time. The issue relates to Moore’s Law and the growth of processing capacity (whether through raw clock speed, the multicore approach, or magic and hamsters). Looking back over the last 10 years or so, we probably have something like 10 to 20 times the processing power we started with.
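
For what it is worth, here is the back-of-the-envelope math behind that guess, as a quick Python sketch. The doubling periods are my own assumptions rather than anything official, and raw transistor counts are not the same thing as delivered performance (which is rather the point of what follows).

```python
# Back-of-the-envelope: compound doubling of raw capacity over a decade.
# The doubling periods are assumptions for illustration, not measurements.

def growth_factor(years: float, doubling_period_years: float) -> float:
    """How many times capacity multiplies over `years` at a given doubling period."""
    return 2 ** (years / doubling_period_years)

for period in (1.5, 2.0, 3.0):
    print(f"doubling every {period} years -> {growth_factor(10, period):.0f}x over 10 years")

# doubling every 1.5 years -> 102x over 10 years
# doubling every 2.0 years -> 32x over 10 years
# doubling every 3.0 years -> 10x over 10 years
```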

As both a producer and a user of server-side software, I have to wonder: why are my servers (document management, document production, and many others) not providing a corresponding increase in throughput? Why do many server systems maintain the same performance over time, or offer only marginal improvements?

(I leave aside client-side performance for now, because on the client much of the performance improvement has shown up in different ways: new capabilities like multimedia, prettier graphics in the UI, and the ability to multitask and keep 10 different applications open at the same time.)

So, why are my servers not 10 times as fast as they were? I can think of a few reasons:

  1. As has been discussed in other places, the shift from clock-speed-driven improvements to a multicore approach has had an impact. Much software, especially older software, is not written in a way which takes advantage of multiple processors. And re-engineering this software to make better use of multiple processors is often non-trivial, especially when you also have to worry about backwards compatibility, supporting a large number of customers, and finding time to add the new features product management wants. Very few of us can afford to divert a significant group of our development resources for an extended period of time, and it is frequently hard to justify from a business perspective.
  2. Even if your software is architected for multiple processors, often the algorithm is inherently “single threaded” in places, which throttles the whole process (this is Amdahl’s Law in action; see the sketch after this list).
  3. Also, even if you are well architected for multiple processors, the parallelism does not come for free. The overhead introduced in coordinating the work across processors can easily consume a non-trivial portion of your processor gains.
  4. Even excluding the shift to multicore, much software has not kept up with performance improvement provided through pure clock speed. There are a number of reasons for this:
    • We are frequently very feature driven. The desire to compete, expand and grow often leads us to add features to existing software at an alarming rate. While this is necessary from a business perspective, the addition of these new features often slows the software down faster than the hardware speeds it up. Note, this is why I think it is very important to architect software so as to isolate “core” processing from “features”. That way, features can be removed from the configuration when not needed, and not allowed to impede performance. It is also why it is important, in each cycle of development on a product, to verify that performance on the same hardware is at least as good as it was before.
    • Processing power is not the whole story (yeah, I know, we all know this). Much of our software is not entirely CPU bound; the bottlenecks are often elsewhere. Much of our processing, especially for large documents, is bound more by memory, disk speed, network speed, and dependencies on other systems. Given that, there is only a limited amount of benefit to be gained from pure processor speed.
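
Points 2 and 3 above are really just Amdahl’s Law plus coordination overhead. Here is a minimal sketch of the effect; the 5% serial fraction and the 2% per-core overhead are numbers I invented purely for illustration.

```python
# Amdahl's Law with a crude coordination-overhead term bolted on.
# serial_fraction: portion of the work that stays single threaded.
# overhead_per_core: extra coordination cost added for each core beyond the first.
# Both figures used below (5% serial, 2% overhead) are invented for illustration.

def speedup(cores: int, serial_fraction: float, overhead_per_core: float = 0.0) -> float:
    parallel_fraction = 1.0 - serial_fraction
    amdahl = 1.0 / (serial_fraction + parallel_fraction / cores)
    return amdahl / (1.0 + overhead_per_core * (cores - 1))

for cores in (1, 2, 4, 8, 16):
    print(f"{cores:2d} cores -> {speedup(cores, 0.05, 0.02):.1f}x")

# 16 cores with only 5% serial work and 2% per-core overhead comes out around 7x, not 16x.
```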

Service Oriented Architecture is your Ticket to Hell?

With reference to Service Oriented Architecture is your Ticket to Hell, it always amuses me how people insist on calling any idea which does not agree with their own “bullshit” – always thinking in absolutes, and believing “my ideas are great, yours are BS”. Remember, an idea is a dangerous thing when it is the only one you’ve got. The statement that Service Oriented Architecture (SOA) increases agility can be interpreted in two ways: as increasing the agility of your architecture, or as increasing your ability to adhere to the dogma of “agile development” (which has been bastardized as much as all dogma ultimately is).

(Of course, I tend to think of SOA in the dogmatic view of Erl as somewhat bastardized as well, and I do not recognize his authority on the subject as absolute. I was modeling systems as collections of autonomous, interacting objects/services years before the term was hijacked.)

I will start by looking at the closing statement of the post, since I actually agree with it:

What I am saying is that, if SOA is scaled up without precaution, it can create systems so precarious that anyone asked to maintain them will feel like s/he’s won a ticket to programmer hell.

While I agree with this statement, I do not agree with specifically targeting SOA. This statement applies equally well to any architectural model, including any emergent architecture coming out of an agile development project.

Let’s now look at the two specific concerns expressed with SOA.

It is not entirely clear to me that SOA requires excessive amounts of “up front” architecture. The only locked-in architectural decision is the one to model your system as a system of interacting services. Even the choice of what kind of service bus to use should not imply lock-in, since if you implement things properly, it is not particularly onerous to move services from one context to another. And the decision to model your system as a collection of loosely coupled services does increase the agility of your project, in some respects. Need an additional execution component? It is fairly easy to implement one without disrupting the rest of the system (a rough sketch of what I mean follows below). Need to take one out, or change its implementation? Same thing.
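
To illustrate what I mean by loose coupling, here is a toy, in-process sketch. Callers depend on a service name and a message contract, not on a concrete implementation, so adding or replacing an execution component does not disturb the rest of the system. The registry, the service name, and the handlers here are all hypothetical; this is not any particular SOA product, and certainly not my actual systems.

```python
# Toy stand-in for whatever service bus / locator you actually deploy.
# Callers know only a service name and a message shape, never a concrete class.

from typing import Callable, Dict

class ServiceBus:
    def __init__(self) -> None:
        self._services: Dict[str, Callable[[dict], dict]] = {}

    def register(self, name: str, handler: Callable[[dict], dict]) -> None:
        # Replacing an implementation is just re-registering under the same name.
        self._services[name] = handler

    def call(self, name: str, message: dict) -> dict:
        return self._services[name](message)

bus = ServiceBus()
bus.register("render-document", lambda msg: {"status": "rendered", "doc": msg["doc"]})

# Later, a new or re-implemented component drops in without touching its callers:
bus.register("render-document", lambda msg: {"status": "rendered-v2", "doc": msg["doc"]})
print(bus.call("render-document", {"doc": "quote-123"}))
```

Obviously a real system has transport, versioning, and contract-governance concerns that this glosses over; the point is only that the coupling lives in the contract, not in the callers’ knowledge of any implementation.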

Looking at the second concern, I would agree that it is possible to create “strange loops” and other architectural oddities through unconstrained application of service oriented architectures. The same was said for a long time about inheritance dependencies in object oriented systems. It remains important for the architect of the system to understand the implications of any services being used. This is an inherent complexity of large, complex, distributed systems.

(As an aside, this is a fundamental problem I have with agile methodologies – the idea that up-front architecture is sacrilege – and I have seen little to no evidence that agile methodologies scale to large, complex projects.)

As for the comparison between object oriented approaches and SOA, I do not see the two approaches as being mutually exclusive. What are services but large-scale objects which respond to messages and provide a service/behaviour? Many of the same modeling concepts which apply to OOAD also apply at this larger scale (though some, such as the granularity of operations, do not).

Ultimately, I find SOA to be a useful approach to modeling large, complex, distributed systems (and yes, I have built a few). Is it perfect? Probably not. Are the “gotchas” in there if you apply it blindly, and without due thought? Absolutely – the same as with any other approach I have seen. Is it the correct approach for every system and every project? Absolutely not. It is one approach. It pays to know more than one, and to use the correct one in the correct situation.

What Microsoft Doesn’t Want You to Know about WPF

Looking at Eric Sink’s post What Microsoft Doesn’t Want You to Know about WPF – gee, I thought I was the only person who coded on vacation (at least that is what my wife tells me).

Anyway, I agree with the observation that “beautiful” is definitely not the default for WPF – certainly not until Microsoft’s toolset catches up. Maybe then beautiful will be the default, or at least a selectable option.

I guess the point, though, is that WPF is supposed to let you separate design from coding, and enable you to let designers design and programmers program. I have never actually seen this work in the real world, but I am forever hopeful. The fact is, though, that no technology or tool is going to protect you from creating ugly designs – just as using the right language will not guarantee you will not produce bad code, and having the right process does not guarantee that your project will be a success. All it does is improve your odds a little. Maybe, if you are lucky.

Is Vista as bad as they say?

Over the last few months (or the last year or more), it has become extremely fashionable to beat up on Vista. Heck, it is a great way to generate hits on your site or blog, maybe get Dugg, whether you have anything useful to say or not. I am talking about posts like this, or this, or this whole blog.

Personally, I run Vista on several machines, and have had few problems that were not related to the failure of third parties to provide updated drivers or updated versions of their software for Vista (it sometimes makes me wonder whether there has been a conspiracy on the part of other vendors to purposely sabotage Vista – but more likely they just have not bothered to provide what customers pay for). I also still run XP on a couple of boxes, and Win2K3. On my main development box, I also run a number of operating systems in VMWare, including WinXP, Win2K3, Fedora, Ubuntu, and several “minimalist” Linux distros for playing around with.

An unfortunate fact of life is that all operating systems available right now suck, at least in some aspect or another. Linux suffers from many driver limitations (though this is getting better), and a wannabe user interface that spends far too much time trying to look like Windows, while missing the point of usability altogether. Windows (all versions) suffer from security issues, and from performance and stability issues inherent in trying to be all things to all people. I will not comment on Mac OSX, because I have not run it. It is also kind of irrelevant, since I cannot run it unless I buy Apple’s hardware.

Vista has its own usability issues. Some that are pointed out are valid. The UAC implementation is moronic. The UI path you have to follow to connect to a wireless network is annoying. Here is one I discovered today – disk defragmentation. When you defragment your hard drive, you get this useful dialog:

[Screenshot: Vista’s disk defragmenter dialog]

Isn’t that helpful? No progress indication. No estimated time to completion. Just a statement that it could take anywhere from a few minutes to a few hours. Gee, thanks.
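
It is not as if progress reporting is rocket science, either. Here is a rough sketch of the kind of feedback I mean, assuming (as defragmentation surely can) that the work breaks into chunks of known size; the chunk sizes and the sleep are stand-ins for real work.

```python
# A naive progress / ETA report, assuming the job splits into chunks of known size.
import time

def process_chunks(chunk_sizes):
    total = sum(chunk_sizes)
    done = 0
    start = time.monotonic()
    for size in chunk_sizes:
        time.sleep(0.1)  # pretend to process one chunk
        done += size
        elapsed = time.monotonic() - start
        remaining = elapsed * (total - done) / done
        print(f"{100 * done / total:5.1f}% complete, roughly {remaining:.1f}s remaining")

process_chunks([10, 20, 30, 40])
```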

The problem is, this kind of thing is not just a problem in Vista, or Windows in general. It is pervasive in all operating systems, and almost all software written to run on them. Most software is filled with minor little usability gaps like this.

So stop beating up on Vista (unless you need the traffic), and start thinking about how to make the whole situation better.