Usability: Are “Stupid Users” really just a symptom of lazy software?

Any conversation with programmers or technical support people about users will, sooner or later, turn to stories of “can you believe how stupid users are?” But how often is it really the software that is stupid, rather than the users?

Users frequently make some very simplistic assumptions about software (or computerized devices in general):

  • Simple things will work.
  • If it lets me do it, everything must be OK.

These are not really bad assumptions. Many of the things mere users try to do only sound stupid to those “in the know” – those who have been suitably trained and conditioned by software to accept that the perfectly reasonable things users want to do are indeed stupid.

Take an example. A user has an MP3 file, and they really want a WAV file. Naively, the user renames the file from a .mp3 extension to a .wav extension, and is baffled that the file does not behave as a WAV file. We all know that this is not how software works, right? This user then becomes another story for some tech support person.

However, there was nothing wrong with the user. The user wanted a WAV. The OS let him rename the file from .mp3 to .wav, so everything must be OK, right?

I would suggest that it is the software here that is stupid, not the user. Or more correctly, the software is just lazy. It cannot be bothered preventing the user from doing things that don’t make sense. It cannot be bothered acting in an intuitive manner, or at least informing the user that it is not acting so. Hey, maybe the software could actually do something useful, like convert the MP3 file to a WAV file, which is what the user wants in the first place. Or at the very least, tell the user how to do it.
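
Here is a minimal sketch, in Python, of what less lazy software might do: check what the file actually is rather than trusting its extension, then perform the conversion the user wanted. It assumes the ffmpeg command-line tool is available, and the file names are hypothetical.

```python
import subprocess

def looks_like_mp3(path: str) -> bool:
    """Check the file's magic bytes instead of trusting its extension."""
    with open(path, "rb") as f:
        header = f.read(3)
    # MP3 files typically begin with an ID3 tag or an MPEG frame sync.
    return header.startswith(b"ID3") or header[:2] in (b"\xff\xfb", b"\xff\xf3")

def convert_to_wav(src: str, dest: str) -> None:
    """Do what the user actually meant: transcode the audio to WAV."""
    subprocess.run(["ffmpeg", "-i", src, dest], check=True)

# "song.wav" is really a renamed MP3 -- detect it and do something useful.
if looks_like_mp3("song.wav"):
    print("This file is really an MP3; converting it properly.")
    convert_to_wav("song.wav", "song-converted.wav")
```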

In general, users are not stupid. They just want to do stuff, and they expect software to allow them to do it in an intuitive manner. So if your tech support logs are filled with stories of “stupid users”, maybe you should have a long, hard look at your software.

Don’t hide or disable menu items?

I wholeheartedly disagree with this post over on Joel on Software.

Actually, I agree with not hiding functionality, but nothing (including menu items) should be enabled in the UI if it is not possible to perform that function. That is not to say developers should be lazy – don’t disable things just because it is inconvenient for you (the developer) to support them. If it is reasonable, leave the item enabled, and lead the user through what they need to do to perform the task.

However, there are things in most programs which you really cannot do at a certain point in time, and that should be clear to the user, along with why it is not possible, and how to proceed. The user should never be left at a dead end. On the same note, however, the user should never be led to believe something is possible, only to be denied.
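
As a rough illustration, here is a minimal sketch (Python/tkinter, with a hypothetical Save action) of the pattern: the item is disabled when the action is genuinely impossible, but the status bar tells the user why, and what to do next, so there is no dead end.

```python
import tkinter as tk

root = tk.Tk()
status = tk.StringVar(value="Ready")
tk.Label(root, textvariable=status, anchor="w").pack(side=tk.BOTTOM, fill=tk.X)

menubar = tk.Menu(root)
file_menu = tk.Menu(menubar, tearoff=0)
file_menu.add_command(label="Save", command=lambda: status.set("Saved."))
menubar.add_cascade(label="File", menu=file_menu)
root.config(menu=menubar)

def set_save_enabled(document_open: bool) -> None:
    if document_open:
        file_menu.entryconfig("Save", state=tk.NORMAL)
        status.set("Ready")
    else:
        # Disabled, but the user is told why, and how to proceed.
        file_menu.entryconfig("Save", state=tk.DISABLED)
        status.set("Save is unavailable: open or create a document first.")

set_save_enabled(document_open=False)  # no document yet
root.mainloop()
```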

As I write this, I realize I do not wholeheartedly disagree, but I do disagree – like most broad, generalized statements, it is wrong, or at least not entirely right.

Thoughts in the Middle of the Night

I am just coming off an all-nighter – it has been a long time since I got so wrapped up in coding that I worked all night.

After I got too tired to code effectively, I got to reading some blogs and thinking on various topics. One of the things I was thinking about (obviously not for the first time) is the whole open source software movement. As always, there is a fair amount of rhetoric out there regarding the superiority of open source software, the TCO of OSS applications, the advantages of development under the open source model, etc., and even conjecture about the ultimate demise of all non-OSS development.

A number of questions have always nagged at me about the claims of OSS:

  1. Believers frequently claim that OSS produces better software, with “better” defined in various ways – fewer defects, better functionality, more secure, etc. Is there empirical data to support this on a broad scale? Yes, examples are frequently given, but usually the comparison is of one or more highly successful OSS projects against one or more bad examples of commercial, closed-source applications. Is there any broad, unbiased comparison of large numbers of OSS projects to large numbers of non-OSS projects?
  2. Similarly, believers often claim that the process of open source development is much more efficient, effective, and innovative than its non-OSS counterparts. Again, OSS success stories are frequently compared to horror stories from the non-OSS world. Is there any large-scale, unbiased comparison out there? For example, it is often quoted that a very large percentage of software projects are late, over budget, or complete failures. Is the open source world any better? People always talk about the successes of OSS, but take a browse around SourceForge some time – there are a huge number of projects there that are never completed, never deliver anything, never get past alpha, etc. The OSS statistics always seem to be somewhat selective.
  3. Many people predict the demise of closed-source development (and have for a long time). Are there any clear statistics as to the number of developers working on OSS versus non-OSS development (I know, many do both)? Or is there information on the economic force of OSS versus non-OSS – how much economic activity in the IT world is driven by OSS?

I don’t have answers to any of these right now – just some thoughts which occurred to me through the night – hopefully I will have time to dig deeper into this over the next while.

Is Software High Tech? If not is it a Commodity?

I was reading Is Software High Tech? If not is it a Commodity? « Tech IT Easy. It struck me that the question is not entirely meaningful. I agree with the statement “software by itself is no longer high-tech.”

However, the same question may be asked of many other aspects of technology. Take electronics, for example. There is no denying that a great deal of electronics is obviously “high tech”, but being electronic is not, by itself, enough to make something high tech. Is a transistor radio high tech?

In the same way, there are many, many kinds of software out there which are decidedly not high tech (including much of the web). This is not to say they are not innovative – being innovative is about much more than the technology.

Fred’s Laws – How not to write software

This begins a series of posts on Fred’s Laws – basically a set of anti-rules on how not to develop software.

Over the past twenty-odd years, I have seen a lot of software projects crash and burn. Many were doomed from the start, while many others died slow, painful deaths after hopeful beginnings. Some finished, and the systems went into production, without anyone ever realizing that the project was a failure. Others should have failed, but managed to struggle through due to the heroic efforts of one or more dedicated (and usually really smart) people.

I have also seen more than a few “failed” projects that were technical successes. We built really cool software – on time, on budget, and with good quality. These projects failed in some other respect – usually they were business failures for one reason or another.

The environments in which these projects have died have been varied as well. Some tried to make it with no process at all. Some had lots and lots and lots (and lots and lots) of process. I have not seen a great deal of correlation between process and success (well, except that the process I pick for my projects is always successful 😉 ).

When I look back on these catastrophic projects, usually I can see where things went wrong. In fact, most of the time I could see where they were going wrong while it was happening, like watching a car crash in slow motion, but was frequently powerless to avoid the impact. More often than not (in fact, I would be willing to say always), the root cause was something completely avoidable (from a technical or project perspective). Never was it because we chose Windows over Linux (or vice versa), nor because of the programming language we chose, nor because what we set out to do was technically impossible.

As I have written Fred’s Laws (well, written them in my head – none of them are actually written down yet!), it occurs to me that they all seem to come straight from the department of the bloody obvious. No rocket science here. If they are this obvious, why even write them down? Well, the reason is that, despite how really obvious all of this is, I watch projects ignore them all the time. Most of the time, in fact.

So, stay tuned. I am going to try to post one law per day (or so) until I run out of ideas.

BTW, as a little footnote, I have been involved in a few successful projects along the way. It just always seems to be the ones that failed (and failed spectacularly) that stick out in my memory.

A Picture of the Multicore Crisis -> Moore’s Law and Software

I was reading A Picture of the Multicore Crisis, and got to thinking about something which has bothered me for a long time. The issue relates to Moore’s Law and the growth of processing capacity (whether through raw clock speed, the multicore approach, or magic and hamsters). Looking at the last 10 years or so, we probably have something like 10-20 times the processing power we had 10 years ago.

As a producer of server-side software, and a user of server software, I have to wonder – why are my servers (document management, document production, and many others) not providing a corresponding increase in throughput? Why do many server systems maintain the same performance over time, or offer only marginal improvements?

(I leave aside client-side performance for now, because on the client side much of the improvement has shown up in different ways, such as new capabilities like multimedia, prettier graphics in the UI, and the ability to multitask and keep 10 different applications open at the same time.)

So, why are my servers not 10 times as fast as they were? I can think of a few reasons:

  1. As has been discussed in other places, the shift from clock-speed-driven improvements to a multicore approach has had an impact. Much software, especially older software, is not written in a way which takes advantage of multiple processors. And often, re-engineering this software to better use multiple processors is non-trivial, especially when you have to worry about things like backwards compatibility and supporting a large number of customers, finding time to add the new features product management wants, etc. Very few of us can afford to divert a significant group of our development resources for an extended period of time, and it is frequently hard to justify from a business perspective.
  2. Even if your software is architected for multiple processors, often the algorithm is inherently “single threaded” in places, which throttles the whole process (see the sketch after this list).
  3. Also, even if you are well architected for multiple processors, this does not come for free. The overhead introduced in coordinating the parallel work can easily consume a non-trivial portion of your processor gains.
  4. Even excluding the shift to multicore, much software has not kept up with performance improvement provided through pure clock speed. There are a number of reasons for this:
    • We are frequently very feature driven. The desire to compete, expand, and grow often leads us to add features to existing software at an alarming rate. While this is necessary from a business perspective, the addition of these new features often slows down the software faster than the hardware speeds it up. Note, this is why I think it is very important to architect software so as to isolate “core” processing from “features”. This way, features can be removed from the configuration when not needed, and not allowed to impede performance. It is also why it is important, in each cycle of development on a product, to assess whether performance on the same hardware is at least as good as before.
    • Processing power is not the whole story (yeah, I know, we all know this). Much of our software is not entirely CPU bound; the bottlenecks are often elsewhere. Much of our processing, especially for large documents, is bound more by memory, disk speed, network speed, and dependencies on other systems. Given that, there is only a limited amount of benefit to be gained through pure processor speed.
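
To put rough numbers on points 2 and 3, here is a minimal sketch of Amdahl’s Law in Python. The overhead_per_core parameter is my own simplistic assumption about coordination cost, not part of the law itself.

```python
def amdahl_speedup(parallel_fraction: float, cores: int,
                   overhead_per_core: float = 0.0) -> float:
    """Estimated speedup when `parallel_fraction` of the work parallelizes
    perfectly; `overhead_per_core` is a hypothetical coordination cost per
    core, expressed as a fraction of the total single-core work."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / cores + overhead_per_core * cores)

# A workload that is 90% parallel on 16 cores: about 6.4x, not 16x...
print(amdahl_speedup(0.90, 16))
# ...and with a 0.5% coordination cost per core, it drops to about 4.2x.
print(amdahl_speedup(0.90, 16, overhead_per_core=0.005))
```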

Service Oriented Architecture is your Ticket to Hell?

With reference to Service Oriented Architecture is your Ticket to Hell, it always amuses me how people insist on calling any idea which does not agree with their own “bullshit” – always thinking in terms of absolutes, and believing “my ideas are great, yours are BS”. Remember, an idea is a dangerous thing when it is the only one you’ve got. The statement that Service Oriented Architecture (SOA) increases agility can be interpreted in two ways: as increasing the agility of your architecture, or as increasing your ability to adhere to the dogma of “agile development” (which has been bastardized as much as all dogma ultimately is).

(Of course, I tend to think of Erl’s dogmatic view of SOA as somewhat bastardized as well, and I do not recognize his authority on the subject as absolute. I was modeling systems as collections of autonomous, interacting objects/services years before the term was hijacked.)

I will start by looking at the closing statement of the post, since I actually agree with it:

What I am saying is that, if SOA is scaled up without precaution, it can create systems so precarious that anyone asked to maintain them will feel like s/he’s won a ticket to programmer hell.

While I agree with this statement, I do not agree with specifically targeting SOA. This statement applies equally well to any architectural model, including any emergent architecture coming out of an agile development project.

Let’s now look at the two specific concerns expressed with SOA.

It is not entirely clear to me that SOA requires excessive amounts of “up front” architecture. The only locked-in architectural decision is the one to model your system as a collection of interacting services. Even the choice of which service bus to use should not imply lock-in, since if you implement things properly, it is not particularly onerous to move services from one context to another. And the decision to model your system as a collection of loosely coupled services does increase the agility of your project, in some respects. Need an additional execution component? It is fairly easy to implement one without disrupting the rest of the system. Need to take one out, or change its implementation? Same thing.
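
To make that loose coupling concrete, here is a minimal sketch in Python, with hypothetical message names: services register against a message type, so adding, removing, or replacing one does not disturb the rest of the system.

```python
from typing import Callable, Dict

Handler = Callable[[dict], dict]

class ServiceBus:
    """A toy service bus: routes messages to whichever service handles them."""

    def __init__(self) -> None:
        self._handlers: Dict[str, Handler] = {}

    def register(self, message_type: str, handler: Handler) -> None:
        self._handlers[message_type] = handler

    def send(self, message_type: str, payload: dict) -> dict:
        return self._handlers[message_type](payload)

bus = ServiceBus()
bus.register("render.document", lambda p: {"status": "rendered", "id": p["id"]})

# Adding a new execution component later is one registration; nothing
# else in the system needs to change.
bus.register("convert.audio", lambda p: {"status": "converted", "id": p["id"]})

print(bus.send("render.document", {"id": 42}))
```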

Looking at the second concern, I would agree that it is possible to create “strange loops” and other architectural oddities through unconstrained application of service oriented architectures. The same was said for a long time about inheritance dependencies in object oriented systems. It remains important for the architect of the system to understand the implications of any services being used. This is an inherent complexity of large, complex, distributed systems.

(As an aside, this is a fundamental problem I have with agile methodologies – the idea that up-front architecture is sacrilege – and I have seen little to no evidence that agile methodologies scale to large, complex projects.)

As for the comparison between object oriented approaches and SOA, I do not see the two approaches as mutually exclusive. What are services but large-scale objects which respond to messages and provide a service/behaviour? Many of the same modeling concepts which apply to OOAD also apply at this larger scale (though some, such as the granularity of operations, do not).

Ultimately, I find SOA to be a useful approach to modeling large, complex, distributed systems (and yes, I have built a few). Is it perfect? Probably not. Are the “gotchas” in there if you apply it blindly, and without due thought? Absolutely – the same as with any other approach I have seen. Is it the correct approach for every system and every project? Absolutely not. It is one approach. It pays to know more than one, and to use the correct one in the correct situation.

What Microsoft Doesn’t Want You to Know about WPF

Looking at Eric Sink’s post What Microsoft Doesn’t Want You to Know about WPF – gee, I thought I was the only person who coded on vacation (at least that is what my wife tells me).

Anyway, I agree with the observation that “beautiful” is definitely not the default for WPF – certainly not until Microsoft’s toolset catches up. Maybe then beautiful will be the default, or at least a selectable option.

I guess the point, though, is that WPF is supposed to let you separate design from coding, enabling designers to design and programmers to program. I have never actually seen this work in the real world, but I am forever hopeful. The fact is, though, that no technology or tool is going to protect you from creating ugly designs – just as using the right language will not guarantee you will not produce bad code, and having the right process does not guarantee that your project will be a success. All it does is improve your odds a little. Maybe, if you are lucky.