Five easy ways to fail?

Ok, so I just read Five easy ways to fail, which itself is just a quote from Joel Spolsky's article in Inc. Magazine. While I usually find Joel’s stuff intelligent even when I do not agree with it, and I actually agree with much of the article, the piece quoted on his blog is one of the most mind-numbingly stupid statements I have ever heard outside of a political speech.

“Even though a bad team of developers tends to be the No. 1 cause of software project failures…”

I have never seen any statistics which support this statement. In 20+ years, I have never been part of a project (either as a member or as an observer) which would support this statement. I have been involved in projects where stellar teams overcame bad management, bad scheduling and many other common obstacles, but never have I seen a well-managed, well-thought-out project fail because the programmers just were not smart enough. I would challenge Joel to provide any evidence to support this.

Then again, I have never seen anyone stupid enough to have hired an entire team of stupid people, and then been stupid enough to keep them. If this is the case, you have a much more serious problem than dumb programmers.

Also, while it would be nice to have the luxury of hiring only exactly the developers who fit your profile, that is a luxury most of us do not have (see my previous post on hiring). The reality is that you are almost always going to have a distribution of talents on your team – you are going to have stars, you are going to have duds, and you are going to have everything in between. I am always guided by an article I read in Harvard Business Review many years ago, in which the late Bill Walsh talked about building great teams. The basic idea was that in any team of ten people, you will typically have 2 people who are so good they will over-achieve no matter what you do. You will also likely have 2 people who will under-achieve no matter what. The six in the middle may under-achieve or over-achieve, depending upon how they are led. The deciding factor in whether you have a stellar team or a failing team is how those six in the middle are guided/managed/coached/led.

To say that most projects fail because the team is not competent is not statistically supported, is overly general in the extreme, and smacks of the kind of statement bad managers make to cover the fact that they are bad managers.

Fred’s Laws – How not to write software

This begins a series of posts on Fred’s Laws – basically a set of anti-rules on how not to develop software.

Over the past twenty-odd years, I have seen a lot of software projects crash and burn. Many were doomed from the start, while many others died slow, painful deaths after hopeful beginnings. Some finished, and the systems went into production, without anyone ever acknowledging that the project was a failure. Others should have failed, but managed to struggle through due to the heroic efforts of one or more dedicated (and usually really smart) people.

I have also seen more than a few “failed” projects that were technical successes. We built really cool software. We were on time, on budget, and had good quality. They failed in some other aspect – usually they were business failures for one reason or another.

The environments in which these projects have died have been varied as well. Some tried to make it with no process at all. Some had lots and lots and lots (and lots and lots) of process. I have not seen a great deal of correlation between process and success (well, except that the process I pick for my projects is always successful 😉 ).

When I look back on these catastrophic projects, usually I can see where things went wrong. In fact, most of the time I could see where they were going wrong while it was happening, like watching a car crash in slow motion, but was frequently powerless to avoid the impact. More often than not (in fact, I would be willing to say always), the root cause was something completely avoidable (from a technical or project perspective). Never was it because we chose Windows over Linux (or vice versa), nor because of the programming language we chose, nor because what we set out to do was technically impossible.

As I have written Fred’s Laws (well, written them in my head; none of them are actually written down yet!), it occurs to me that they all seem to come straight from the department of the bloody obvious. No rocket science here. If they are this obvious, why even write them down? Well, the reason is that, despite how obvious all of this is, I watch projects ignore them all the time. Most of the time, in fact.

So, stay tuned. I am going to try to post one law per day (or so) until I run out of ideas.

BTW, as a little footnote, I have been involved in a few successful projects along the way. It just always seems to be the ones that failed (and failed spectacularly) that stick out in my memory.

A Picture of the Multicore Crisis -> Moore’s Law and Software

I was reading A Picture of the Multicore Crisis, and got to thinking about something which has bothered me for a long time. The issue relates to Moore’s Law and the growth of processing capacity (whether through raw clock speed, the multicore approach, or magic and hamsters). Looking at the last 10 years or so, we probably have something like 10-20 times the processing power we had a decade ago.

As both a producer and a user of server-side software, I have to wonder: why are my servers (document management, document production, and many others) not providing a corresponding increase in throughput? Why do many server systems maintain the same performance over time, or offer only marginal improvements?

(I leave aside client-side performance for now, because on the client side much of the performance improvement has shown up in different ways: new capabilities like multimedia, prettier graphics in the UI, and the ability to multitask and keep 10 different applications open at the same time.)

So, why are my servers not 10 times as fast as they were? I can think of a few reasons:

  1. As has been discussed elsewhere, the shift from clock-speed-driven improvements to a multicore approach has had an impact. Much software, especially older software, is not written in a way which takes advantage of multiple processors. And re-engineering this software to better use multiple processors is often non-trivial, especially when you also have to worry about backwards compatibility, supporting a large number of customers, finding time to add the new features product management wants, and so on. Very few of us can afford to divert a significant portion of our development resources for an extended period of time, and it is frequently hard to justify from a business perspective.
  2. Even if your software is architected for multiple processors, often the algorithm is inherently “single threaded” in places, which throttles the whole process (there is a rough numeric sketch of this after the list).
  3. Also, even if you are well architected for multiple processors, this does not come for free. The overhead introduced in coordinating the work across processors can easily consume a non-trivial portion of your processor gains.
  4. Even excluding the shift to multicore, much software has not kept pace with the performance improvements provided by pure clock speed. There are a number of reasons for this:
    • We are frequently very feature driven. The desire to compete, expand and grow often leads us to add features to existing software at an alarming rate. While this is necessary from a business perspective, the addition of these new features often slows the software down faster than the hardware speeds it up. Note, this is why I think it is very important to architect software so as to isolate “core” processing from “features”. This way, features can be removed from the configuration when not needed, and not allowed to impede performance. It is also why it is important, in each cycle of development on a product, to assess whether performance on the same hardware is at least as good as it was in the previous release.
    • Processing power is not the whole story (yeah, I know, we all know this). Much of our software is not entirely CPU bound. The bottlenecks are often elsewhere. Much of our processing, especially for large documents, is more bound by memory, disk speed, network speed, and dependencies on other systems. Given that, there is only a limited amount of benefit to be gained through pure processor speed.
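
To put some rough numbers behind points 2 and 3, here is a small sketch of my own (not from the article I was reading) of why the serial portions and the coordination overhead eat so much of the theoretical gain. It is a rough model, assuming a fixed serial fraction (Amdahl’s law) plus a crude per-core overhead term; the serial fraction and overhead figures are invented purely for illustration.

```python
# Rough model: Amdahl's law with a crude per-core coordination overhead bolted on.
# All numbers below are made up for illustration only.

def speedup(cores, serial_fraction, overhead_per_core=0.0):
    """Theoretical speedup on `cores` processors.

    serial_fraction   -- portion of the work that cannot be parallelized (0..1)
    overhead_per_core -- coordination cost added for each core beyond the first,
                         as a fraction of the single-core running time
    """
    parallel_fraction = 1.0 - serial_fraction
    time = (serial_fraction
            + parallel_fraction / cores
            + overhead_per_core * (cores - 1))
    return 1.0 / time


if __name__ == "__main__":
    for cores in (1, 2, 4, 8, 16):
        print(cores, "cores ->", round(speedup(cores, serial_fraction=0.2,
                                               overhead_per_core=0.01), 2))
    # With 20% of the work serial and 1% overhead per extra core, 16 cores give
    # roughly a 2.5x speedup -- and the curve actually peaks around 8 cores.
```

Even with fairly generous assumptions, the gain flattens out long before it approaches the raw increase in processing power, and past a certain point adding cores can actually make things worse.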

Service Oriented Architecture is your Ticket to Hell?

With reference to Service Oriented Architecture is your Ticket to Hell, it always amuses me how people insist on calling any idea which does not agree with their own “bullshit” – always thinking in terms of absolutes, and believing “my ideas are great, yours are BS”. Remember, an idea is a dangerous thing when it is the only one you’ve got. The statement that Service Oriented Architecture (SOA) increases agility can be interpreted in two ways: as increasing the agility of your architecture, or as increasing your ability to adhere to the dogma of “agile development” (which has been bastardized as much as all dogma ultimately is).

(Of course, I tend to think of the dogmatic, Erl-style view of SOA as somewhat bastardized as well, and I do not recognize his authority on the subject as absolute. I was modeling systems as collections of autonomous, interacting objects/services years before the term was hijacked.)

I will start by looking at the closing statement of the post, since I actually agree with it:

What I am saying is that, if SOA is scaled up without precaution, it can create systems so precarious that anyone asked to maintain them will feel like s/he’s won a ticket to programmer hell.

While I agree with this statement, I do not agree with specifically targeting SOA. This statement applies equally well to any architectural model, including any emergent architecture coming out of an agile development project.

Let’s now look at the two specific concerns expressed about SOA.

It is not entirely clear to me that SOA requires excessive amounts of “up front” architecture. The only locked-in architectural decision is the one to model your system as a system of interacting services. Even the choice of which service bus to use should not imply lock-in, since if you implement things properly, it is not particularly onerous to move services from one context to another. And the decision to model your system as a collection of loosely coupled services does increase the agility of your project, in some respects. Need an additional execution component? It is fairly easy to implement it without disrupting the rest of the system. Need to take one out, or change its implementation? Same thing (there is a small sketch of this below).
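
As a concrete (if tiny) sketch of what I mean by loose coupling here: the rest of the system codes against a small service contract, so an implementation can be replaced, moved out of process, or put behind a bus without disturbing its callers. This is my own illustration, and the names (DocumentRenderer, LocalRenderer, RemoteRenderer) are hypothetical, not taken from the post I am responding to.

```python
# A minimal sketch of loose coupling via a service contract. Names are hypothetical.

from abc import ABC, abstractmethod


class DocumentRenderer(ABC):
    """The service contract the rest of the system codes against."""

    @abstractmethod
    def render(self, document_id: str) -> bytes: ...


class LocalRenderer(DocumentRenderer):
    """An in-process implementation of the contract."""

    def render(self, document_id: str) -> bytes:
        return f"<pdf for {document_id}>".encode()


class RemoteRenderer(DocumentRenderer):
    """The same contract fulfilled by a remote service; callers cannot tell."""

    def __init__(self, endpoint: str):
        self.endpoint = endpoint

    def render(self, document_id: str) -> bytes:
        # In a real system this would be an HTTP or message-bus call.
        return f"<pdf for {document_id} via {self.endpoint}>".encode()


def publish(renderer: DocumentRenderer, document_id: str) -> None:
    """Client code: depends only on the contract, not on where it runs."""
    print(renderer.render(document_id))


if __name__ == "__main__":
    publish(LocalRenderer(), "contract-42")
    publish(RemoteRenderer("https://render.example.internal"), "contract-42")
```

The point is not the mechanics of the call (in-process, HTTP, message bus); it is that the callers never need to know.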

Looking at the second concern, I would agree that it is possible to create “strange loops” and other architectural oddities through unconstrained application of service oriented architectures. The same was said for a long time about inheritance dependencies in object oriented systems. It remains important for the architect of the system to understand the implications of any services being used. This is an inherent complexity of large, complex, distributed systems.

(As an aside, this is a fundamental problem I have with agile methodologies – the idea that up-front architecture is sacrilege – and I have seen little to no evidence that agile methodologies scale to large, complex projects.)

As for the comparison between object oriented approaches and SOA, I do not see the two approaches as being mutually exclusive. What are services but large-scale objects which respond to messages and provide a service/behaviour? Many of the same modeling concepts which apply to OOAD also apply at the larger scale (though of course some, such as granularity of operations, do not).

Ultimately, I find SOA to be a useful approach to modeling large, complex distributed systems (and yes, I have built a few). Is it perfect? Probably not. Are the “gotchas” in there if you apply it blindly, and without due thought? Absolutely – the same as with any other approach I have seen. Is it the correct approach for every system and every project? Absolutely not. It is one approach. It pays to know more than one, and to use the correct one in the correct situation.

What Microsoft Doesn’t Want You to Know about WPF

Looking at Eric Sink’s post What Microsoft Doesn’t Want You to Know about WPF – gee, I thought I was the only person who coded on vacation (at least that is what my wife tells me).

Anyway, I agree with the observation that “beautiful” is definitely not the default for WPF – certainly not until Microsoft’s toolset catches up. Maybe then beautiful will be the default, or at least a selectable option.

I guess the point, though, is that WPF is supposed to let you separate design from coding, and enable you to let designers design and programmers program. I have never actually seen this work in the real world, but I am forever hopeful. The fact is, though, that no technology or tool is going to protect you from creating ugly designs – just as using the right language will not guarantee you will not produce bad code, and having the right process does not guarantee that your project will be a success. All it does is improve your odds a little. Maybe. If you are lucky.