Well, this series will come, but I have decided for various reasons to wait a few weeks to start posting it.
Stay tuned.
Well, I am off to a good start. I was not counting on some issues around moving to my new laptop.
All sorted out now, though. Hopefully tonight I will have Part 1 of this posted.
This begins a series of posts on Fred’s Laws – basically a set of anti-rules on how not to develop software.
Over the past twenty-odd years, I have seen a lot of software projects crash and burn. Many have been doomed from the start, while many others died slow, painful deaths after hopeful beginnings. Some have finished, and the systems are in production, without anyone ever realizing that the project was a failure. Others should have failed, but managed to struggle through due to the heroic efforts of one or more dedicated (and usually really smart) people.
I have also seen more than a few “failed” projects that were technical successes. We built really cool software; we were on time, on budget, and the quality was good. Those projects failed in some other respect – usually they were business failures for one reason or another.
The environments in which these projects have died have been varied as well. Some tried to make it with no process at all. Some had lots and lots and lots (and lots and lots) of process. I have not seen a great deal of correlation between process and success (well, except that the process I pick for my projects is always successful 😉 ).
When I look back on these catastrophic projects, usually I can see where things went wrong. In fact, most of the time I could see where they were going wrong while it was happening, like watching a car crash in slow motion, but was frequently powerless to avoid the impact. More often than not (in fact, I would be willing to say always), the root cause was something completely avoidable (from a technical or project perspective). Never was it because we chose Windows over Linux (or vice versa), nor because of the programming language we chose, nor because what we set out to do was technically impossible.
As I have written Fred’s Laws (well, written them in my head – none of them are actually written down yet!), it occurs to me that they all seem to come straight from the department of the bloody obvious. No rocket science here. If they are this obvious, why even write them down? Well, the reason is that, despite how really obvious all of this is, I watch projects ignore them all the time. Most of the time, in fact.
So, stay tuned. I am going to try to post one law per day (or so) until I run out of ideas.
BTW, as a little footnote, I have been involved in a few successful projects along the way. It just always seems to be the ones that failed (and failed spectacularly) that stick out in my memory.
With reference to Service Oriented Architecture is your Ticket to Hell, it always amuses me how people insist on calling any idea which does not agree with their own, “bullshit” – always thinking in terms of absolutes, and believing “my ideas are great, yours are BS”. Remember, an idea is a dangerous thing when it is the only one you’ve got. The statement that Service Oriented Architecture (SOA) increases agility can be interpreted in two ways: as increasing the agility of your architecture, or as increasing your ability to adhere to the dogma of “agile development” (which has been bastardized as much as all dogma ultimately is).
(of course, I tend to think of SOA in the dogmatic view of Erl as somewhat bastardized as well, and I do not recognize his authority on the subject as absolute. I was modeling systems as collections of autonomous interacting objects/services years before the term was hijacked)
I will start by looking at the closing statement of the post, since I actually agree with it:
What I am saying is that, if SOA is scaled up without precaution, it can create systems so precarious that anyone asked to maintain them will feel like s/he’s won a ticket to programmer hell.
While I agree with this statement, I do not agree with specifically targeting SOA. This statement applies equally well to any architectural model, including any emergent architecture coming out of an agile development project.
Let’s now look at the two specific concerns expressed with SOA.
It is not entirely clear to me that SOA requires excessive amounts of “up front” architecture. The only locked-in architectural decision is the one to model your system as a system of interacting services. Even the choice of what kind of service bus to use should not imply lock-in, since if you implement things properly, it is not particularly onerous to move services from one context to another. And the decision to model your system as a collection of loosely coupled services does increase the agility of your project in some respects. Need an additional execution component? It is fairly easy to implement it without disrupting the rest of the system. Need to take one out, or change its implementation? Same thing.
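To make the loose-coupling point concrete, here is a minimal sketch in Java. The names (QuoteService, QuoteRequest, and so on) are entirely hypothetical – the point is only that callers code to a contract, not to a particular implementation or host.

```java
// Hypothetical sketch of a loosely coupled service contract.
// None of these names come from the post being discussed.

record QuoteRequest(String symbol) {}
record QuoteResponse(String symbol, double price) {}

// Callers depend only on this contract, never on where or how it is hosted.
interface QuoteService {
    QuoteResponse getQuote(QuoteRequest request);
}

// Today's implementation might live in-process...
class InMemoryQuoteService implements QuoteService {
    public QuoteResponse getQuote(QuoteRequest request) {
        return new QuoteResponse(request.symbol(), 42.0); // canned value for the sketch
    }
}

// ...and tomorrow's might sit behind a remote call or a service bus binding.
// Swapping one for the other does not disturb anything coded to QuoteService.
class RemoteQuoteService implements QuoteService {
    public QuoteResponse getQuote(QuoteRequest request) {
        // A real implementation would marshal the request onto a transport;
        // it is stubbed here so the sketch stays self-contained.
        return new QuoteResponse(request.symbol(), 42.0);
    }
}
```

That is the sense in which the architecture stays agile: adding, removing, or re-hosting an implementation touches nothing that depends only on the contract.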
Looking at the second concern, I would agree that it is possible to create “strange loops” and other architectural oddities through unconstrained application of service oriented architectures. The same was said for a long time about inheritance dependencies in object oriented systems. It remains important for the architect of the system to understand the implications of any services being used. This is an inherent complexity of large, complex, distributed systems.
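As an illustration of the kind of “strange loop” I mean, consider the following sketch. The two services and their operations are invented for this example, not taken from the post I am responding to.

```java
// Hypothetical example of a service-level "strange loop".

interface AccountService {
    boolean isInGoodStanding(String accountId);
}

interface BillingService {
    double outstandingBalance(String accountId);
}

// One team decides an account is in good standing when nothing is owed...
class DefaultAccountService implements AccountService {
    private final BillingService billing;
    DefaultAccountService(BillingService billing) { this.billing = billing; }

    public boolean isInGoodStanding(String accountId) {
        return billing.outstandingBalance(accountId) == 0.0;
    }
}

// ...while another team only reports a balance for accounts in good standing.
// Each decision is locally reasonable, but together they form a cycle:
// calling either operation recurses into the other until the stack blows up.
// Nobody notices the loop unless someone understands the whole picture.
class DefaultBillingService implements BillingService {
    private final AccountService accounts;
    DefaultBillingService(AccountService accounts) { this.accounts = accounts; }

    public double outstandingBalance(String accountId) {
        return accounts.isInGoodStanding(accountId) ? 0.0 : 100.0;
    }
}
```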
(as an aside, this is a fundamental problem I have with agile methodologies – the idea that up-front architecture is sacrilege – and I have seen little to no evidence that agile methodologies scale to large, complex projects).
As for the comparison between object oriented approaches and SOA, I do not see the two approaches as being mutually exclusive. What are services but large scale objects which respond to messages and provide a service/behaviour? Much of the same modeling concepts which apply to OOAD also apply at the larger scale (of course some do not – such as granularity of operations).
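The parallel, and the granularity difference, may be easier to see in code. This is only a sketch with made-up types (Order, OrderService, and so on), not anything prescribed by either post.

```java
import java.util.List;

// Hypothetical sketch: the same "ordering" concept at two levels of granularity.

// Inside one process, a fine-grained, chatty object interface is natural.
interface Order {
    void addLine(String sku, int quantity);
    void setShippingAddress(String address);
    double total();
}

// At the service level the same behaviour is exposed as one coarse-grained
// message exchange, because every call may cross a network boundary.
record OrderLine(String sku, int quantity) {}
record PlaceOrderRequest(List<OrderLine> lines, String shippingAddress) {}
record PlaceOrderResponse(String orderId, double total) {}

interface OrderService {
    PlaceOrderResponse placeOrder(PlaceOrderRequest request);
}
```

The modeling instinct is the same in both cases; what changes is how much work each message carries.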
Ultimately, I find SOA to be a useful approach to modeling large, complex distributed systems (and yes, I have built a few). Is it perfect? Probably not. Are the “gotchas” in there if you apply it blindly, and without due thought? Absolutely – the same as with any other approach I have seen. Is it the correct approach for every system and every project? Absolutely not. It is one approach. It pays to know more than one, and to use the correct one in the correct situation.
Wille Faler has written an interesting post, Why IT Executives aren’t embracing Agile, referring in turn to another post on the same subject. Given my background, and my current role, I think I can comment on a technology executive’s opinion of agile processes.
Over the years I have worked on projects using a wide range of processes. Back in the eighties I worked on a team of very bright scientists, writing software primarily for their own use. We had almost no real development process (at best it was managed chaos). This was also one of the most successful software teams of which I have ever been a part. I do not think this is repeatable in most software development environments, because that particular environment had a number of unique characteristics:
Shortly after that, I was involved in a large military project (10 years, billions of dollars, hundreds of thousands of requirements). Needless to say, we had plenty of process. This was the epitome of the heavy process. Between the company I worked for and the many subcontracting organizations, I was exposed to many flavours of software process (all of them heavy). I was also involved in ISO 9000 certification programs, CMM assessments, 6-sigma programs and Design for Manufacturability programs (we did hardware, too). One of the things I learned in all of that was that you can have all the process in the world and still fail. While having a strong software development process (whether it is heavy, agile, or otherwise) may vastly increase your chances of success, it by no means guarantees it.
In the past 10 years, I have become a great proponent of “just enough process” – trying to take what I learned from the heavy processes on the military projects and apply what makes sense in a small, product-oriented environment, while leaving much of the “weight” behind. In the period from about 1998 through 2002 (the last time I directly managed development projects) I was greatly impressed with agile processes. While we never fanatically applied any of the agile methodologies, we did adopt many aspects, such as user stories, iterative and incremental development, and test-driven development. Some aspects just did not fit our environment (such as pair programming). We had a fair amount of success using this approach, and many aspects of agile development are still in use.
Getting back to the topic at hand (why IT executives do not embrace Agile processes), from my perspective, agile processes are definitely viable and advantageous in certain contexts. Also, “heavy” processes certainly do not guarantee success. My feeling is that there is a time and place for both kinds of process. As in most things, it is important to have a number of tools at your disposal, and to have the knowledge of when it is appropriate to use these tools. Remember, an idea is a dangerous thing when it is the only one you have (didn’t I use that a couple of days ago?).
For example, I think it is entirely inappropriate to use “heavy” processes in small, commercial product development. Similarly, as an IT executive, I would be extremely hesitant to use an Agile process on a large, complex development project, because I have not seen sufficient evidence of the viability of the approach.
It all comes down to using the right tools in the right situations.