US vs Canadian Healthcare – a follow up

A few weeks back I posted on my personal experience with US versus Canadian healthcare.

This week, I received the paperwork from my insurance company to sort out the expenses for my adventure in LA. Just to refresh your memory, while in LA I became very sick, and ended up going to an ER at a nearby hospital. While at the ER, I spent 8+ hours in the waiting room, and about 2 actually being treated. A doctor saw and evaluated me. I was monitored for heart rate, BP, etc., had some blood work, and was given an IV for fluids and some meds. I was then given a prescription and discharged.

The cost for this treatment? Just shy of $3500.

No wonder the healthcare system in the US is as screwed up as it is! 


US vs Canadian Healthcare – a story of personal experience

As anyone not in a coma knows, there is a great deal of debate in the US right now about Health Care Reform. During this debate, there are many references to the Canadian Health Care system, typically by Americans who have absolutely no idea what the hell they are talking about – including a former governor of Alaska. It is referred to as “socialized medicine”, and Americans argue that it reduces efficiency, costs the government great sums of money (note that the US government already spends more per capita on health care than the Canadian government does), reduces innovation, has longer wait times, and even leads to people dying while awaiting treatment.

I recently became ill while in Los Angeles for a conference. While being sick is never a fun experience, being diabetic and being sick while travelling in a foreign country by yourself is especially stressful.

However, this gave me an opportunity to experience the US health care system first hand, albeit a little superficially. Also, since my employer provides me with out-of-country health insurance, my experience is from the perspective of someone with health insurance, not someone without. In addition, my opinion of the US health care system is based on a single experience, not a broad sample.

Let's start with my arrival at the ER. I arrived at about 9 PM on a Tuesday evening. The first step was to fill out a little form with basic information – name, address, nature of my complaint. The form was passed through a little hole in the plexiglass partition, and my information was entered into their computer system. I then waited about an hour to see the triage nurse and be prioritized. Between myself, my wife and my kids, I have been to emergency rooms in New Brunswick, Ontario, and Alberta, and do not recall ever waiting more than a few minutes to be triaged. It should be noted that the triage process seemed to be mostly a “first in, first out” kind of process – I did not notice anyone being triaged faster based upon the nature of their complaint.

After being triaged, I guess I was ranked fairly low in terms of priority (hey, I was only vomiting up large amounts of blood), because I then sat from about 10 PM Tuesday evening until 4:30 AM Wednesday waiting to see a doctor. Many people came in, were treated, and left before I was seen, but I understand that once you are triaged, priority is based on who is at the most risk. I also understand that I was only seeing the “walk in” side of the ER – there was another whole flow of patients coming in through the ambulance entrance, with a fair number of trauma patients. Still, seven and a half hours of waiting to see a doctor is longer than anything I have seen in the Canadian health care system. And remember, I was at a private hospital in LA, not a public clinic, so I would expect that this was on the good side with respect to performance.

Once I actually got to see the doctor, I was treated fairly quickly. Note that the goal was not to treat the root cause of my ailment; the primary intent was to stabilize my condition so that I could return to Canada for full treatment. At this, they were very efficient, and I was out in about 3 hours. The process also went much more smoothly because my out-of-country health coverage worked very well with the hospital's admissions/accounting people with regard to payment. God only knows how the experience would have played out had I not had insurance.

In short, my visit to the ER in Los Angeles involved wait times which were significantly longer (for both triage and treatment) than anything I have ever experienced at a hospital in Canada.

To finish off the story, I will describe my follow-up treatment after returning to Canada. On the Wednesday following my return to Canada, I called my family doctor, and got an appointment to see her that afternoon. After that appointment, she referred me to a GI specialist, who I saw the next afternoon. He decided I needed an endoscopy, which happened the next day. Seems pretty efficient to me!

Perhaps Americans should educate themselves on the reality of the Canadian health care system rather than blindly believing the rhetoric of politicians who are bought and paid for by the insurance companies and HMOs, or who simply know nothing about the Canadian system they are criticizing.

IE8 Slow Opening New Tab/Window

I have had a problem over the last few weeks with IE8 (running on the Windows 7 RC). Suddenly, opening a new tab or new window (including initial startup, opening a blank tab, or opening a link in a new tab or window) became extremely slow. I am talking 10-30 seconds just to open a blank tab. It would sit there saying “Connecting”. What the heck is a blank tab connecting to for 30 seconds?

Finally, this morning, I got irritated enough to look for a solution.

After a little digging on the web, I found several references to similar problems which seemed to be related to particular browser add-ons. Unfortunately, I do not have any of the add-ons from any of the discussions I found. It did seem to indicate an add-on could be the problem, however. So, I decided to just work through it the hard way – by trial and error. I opened the IE8 add-on manager (Tools | Manage Add-ons) and disabled all of the add-ons listed. I closed the dialog and created a new tab – and voila, opened in under a second. I then closed the browser and re-launched it. All of my tabs opened almost instantly.

So now I just had to figure out which of the add-ons was causing the problem. Fortunately, I do not have many add-ons:

[Screenshot: Add-ons List]

As it turns out, as soon as I enabled the first one on the list (the Java plugin helper), the slowness returned. Just for good measure, I went through and enabled each of the other add-ons individually, and none of them caused any performance change.

So, now the Java plugin helper is disabled, all the others enabled, and all is good. When I get around to it, I will look and see if there is a fix for this plugin.
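The disable-everything-then-re-enable approach above is really a generic fault-isolation technique, and when you have many suspects it can be sped up by bisection instead of testing one at a time. Here is a minimal sketch in Python; the function and add-on names are hypothetical, and `reproduces` stands in for whatever manual check you perform (e.g. timing how long a new tab takes with a given set of add-ons enabled):

```python
def find_culprit(suspects, reproduces):
    """Bisect a list of suspects to find the one causing a problem.

    reproduces(enabled) -> True if the problem occurs with only the
    items in `enabled` turned on. Assumes exactly one culprit.
    """
    candidates = list(suspects)
    while len(candidates) > 1:
        # Test with only the first half enabled.
        half = candidates[: len(candidates) // 2]
        if reproduces(half):
            candidates = half          # culprit is in the enabled half
        else:
            candidates = candidates[len(half):]  # culprit is in the rest
    return candidates[0]

# Hypothetical add-on list; the check is simulated here.
addons = ["Java plugin helper", "Flash", "PDF reader", "Search toolbar"]
culprit = find_culprit(addons, lambda enabled: "Java plugin helper" in enabled)
print(culprit)  # → Java plugin helper
```

With n add-ons this takes about log2(n) browser restarts instead of n, which matters more for people with toolbars piled on top of toolbars.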

Makes me wonder, though, how something as fundamental as the Java plugin could be causing this problem, with no one screaming about it. Is it just me?

That’s why you play the game!

I just finished watching the Giants beat the Patriots in the Super Bowl. I had not actually intended to watch the game, because I really had little interest in who won. Then I figured, no matter who won, a certain amount of sports history would be made.

Going into this game, no one (myself included) gave the Giants much chance of winning. Up until a few weeks ago, no one would have guessed that they would even be playing. That brings me to the point of this post – the fact that stats really are irrelevant. On any given day, any team can win. That is why they play the game.

This carries over into the “real” world. Whenever you are starting something new – whether it is a business, or a new innovation, or anything else you can think of – there will always be lots of people telling you not to play in certain games because there is no chance of winning. The fact is, there is almost always some chance. It may be slim – but what it comes down to is whether you execute better than the other players on game day (only in the real world, every day is game day).

So do not always run away from the game because there are players out there with better records and better stats. All you have to do is go out and play better.

Easy, right?

Five easy ways to fail?

Ok, so I just read Five easy ways to fail, which itself is just a quote from Joel Spolsky's article in Inc. Magazine. While I usually find Joel's stuff intelligent, even when I do not agree with it, and I actually agree with much of the article, the piece quoted on his blog is one of the most mind-numbingly stupid statements I have ever heard outside of a political speech.

“Even though a bad team of developers tends to be the No. 1 cause of software project failures…”

I have never seen any statistics which support this statement. In 20+ years, I have never been part of a project (either as a member or as an observer) which would support this statement. I have been involved in projects where stellar teams overcame bad management, bad scheduling and many other common obstacles, but never have I seen a well-managed, well-thought-out project fail because the programmers just were not smart enough. I would challenge Joel to provide any evidence to support this.

Then again, I have never seen anyone stupid enough to have hired an entire team of stupid people, and then been stupid enough to keep them. If this is the case, you have a much more serious problem than dumb programmers.

Also, while it would be nice to have the luxury of hiring only exactly the developers who fit your profile, that is a luxury most of us do not have (see my previous post on hiring). The reality is that you are almost always going to have a distribution of talents on your team – you are going to have stars, you are going to have duds, and you are going to have everything in between. I am always guided by an article I read in Harvard Business Review many years ago, where the late Bill Walsh talked about building great teams. The basic idea was that in any team of ten people, you will typically have 2 people who are so good, they are going to over-achieve no matter what you do. You will also likely have 2 people who will under-achieve no matter what. The six in the middle may under-achieve or over-achieve, depending upon how they are led. The deciding factor as to whether you have a stellar team or a failing team is how those six in the middle are guided/managed/coached/led.

To say that most projects fail because the team is not competent is not statistically supported, is overly general in the extreme, and smacks of the kind of statement bad managers make to cover the fact that they are bad managers.

A Picture of the Multicore Crisis -> Moore’s Law and Software

I was reading A Picture of the Multicore Crisis, and got to thinking of something which has bothered me for a long time. This issue is related to Moore's Law and the growth of processing capacity (whether through raw clock speed, or the multicore approach, or magic and hamsters). Looking at the last 10 years or so, we probably have something like 10-20 times the processing power we had 10 years ago.

As a producer of server-side software, a user of server software, etc., it makes me wonder – why are my servers (document management, document production, and many others) not providing a corresponding increase in throughput? Why do many server systems maintain the same performance over time, or offer only marginal improvements?

(I leave aside client side performance for now, because on the client side much of the performance improvements have shown up in different ways, such as new capabilities like multimedia, prettier graphics in the UI, the ability to multitask and keep 10 different applications open at the same time).

So, why are my servers not 10 times as fast as they were? I can think of a few reasons:

  1. As has been discussed in other places, the shift from clock-speed-driven improvements to a multicore approach has had an impact. Much software, especially older software, is not written in a way which takes advantage of multiple processors. And often, re-engineering this software to better use multiple processors is non-trivial, especially when you have to worry about things like backwards compatibility and supporting a large number of customers, finding time to add the new features product management wants, etc. Very few of us can afford to divert a significant group of our development resources for an extended period of time, and it is frequently hard to justify from a business perspective.
  2. Even if your software is architected for multiple processors, often the algorithm is inherently “single threaded” in places, which throttles the whole process.
  3. Also, even if you are well architected for multiple processors, this does not come for free. The overhead introduced in coordinating work across processors can easily consume a non-trivial portion of your processor gains.
  4. Even excluding the shift to multicore, much software has not kept up with performance improvement provided through pure clock speed. There are a number of reasons for this:
    • We are frequently very feature driven. The desire to compete, expand and grow often leads us to add features to existing software at an alarming rate. While this is necessary from a business perspective, often the addition of these new features slows down the software faster than the hardware speeds it up. Note, this is why I think it is very important to architect software so as to isolate “core” processing from “features”. This way, features can be removed from the configuration when not needed, and not allowed to impede performance. It is also why it is important, in each cycle of development on a product, to assess whether performance on the same hardware is at least as good as it was.
    • Processing power is not the whole story (yeah, I know, we all know this). Much of our software is not entirely CPU bound. The bottlenecks are often elsewhere. Much of our processing, especially for large documents, is more bound by memory, disk speed, network speed, and dependencies on other systems. Given that, there is only a limited amount of benefit to be gained through pure processor speed.
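Points 2 and 3 above are essentially Amdahl's law: if any fraction of the work is inherently serial, that fraction caps your speedup no matter how many cores you add. A quick sketch (the numbers are illustrative, not from any particular server product):

```python
def amdahl_speedup(parallel_fraction, cores):
    """Amdahl's law: overall speedup when only part of a workload scales.

    parallel_fraction: fraction of the work (0..1) that can use all cores;
    the remaining (1 - parallel_fraction) runs serially on one core.
    """
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / cores)

# Even a workload that is 90% parallel gets nowhere near 16x on 16 cores:
print(round(amdahl_speedup(0.9, 16), 2))  # → 6.4
```

And this is before counting the coordination overhead in point 3, which Amdahl's law optimistically assumes is zero. It is easy to see how a server that is, say, half serial barely benefits from a jump from 4 to 16 cores.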