Welcome to The Continuum – Part Two

Earlier today, I began to explain The Continuum as an experiment in Social Brainstorming. But that is only half the story (actually, a third, but we will deal with that later).

Beyond this, The Continuum is meant as a demonstration of a Seamless User Experience.

The Continuum grew out of a very simple exercise in which I was brainstorming a new (for me) subject area. While reading about this topic, I was recording (short) thoughts on PostIt notes, and putting them randomly all over the whiteboards in my office. I was doing this in the hope that patterns would eventually emerge – patterns I would not otherwise see.

While I was doing this, someone came into my office, and over the course of our discussions, the question arose as to why I was not using some computer-based tool to do this (I am, after all, a nerd). The reality, unfortunately, is that no tools exist which would allow me to do this without the technology getting in the way. Any computer-based tool tends to make assumptions about how you work, or, worse yet, to force a pattern of work on you. Or you spend more time playing with the tool than you do capturing ideas. This cognitive friction in software means that I tend to lose ideas while trying to capture them, or at least lose the flow of ideas.

It should all be as simple as scribbling on a PostIt note, and slapping it on a whiteboard.

But it isn’t.

We now live in a world dominated by mobile devices. That said, there are still a few (hundred million) PCs in use. What's more, there are now many large-format displays offering rich multi-touch experiences, as well as other modes of interaction, including gestures and voice recognition.

The question then arises: “What constitutes a great user experience in this new world of multi-modal interactions?” This is often described in terms of a Natural User Interface (NUI), which is unfortunately defined somewhat circularly as an interface that feels natural (ok, not quite that obviously, but nearly).

While this is a question I have been pondering for some time, I do not have an answer, or at the very least not the answer (if I did, I would be a lot richer and more famous than I am!).

One aspect of the new user experience that is key to The Continuum experiment is that the user experience should be seamless across all (or at least most) devices. Note that this does not mean that all devices should deliver all of the functionality of the solution. What it does mean is that the solution should exist on all devices, presenting those aspects of the functionality which are appropriate to the device format. Let’s call this Device Appropriateness.

In addition, the user interface should be as transparent as possible. As much as possible, the user should interact directly with content, rather than interacting with content through artificial UI constructs. Buttons, menus, icons – these are all artificial UI constructs. In a perfect world, the UI disappears completely.

Device Appropriateness.

Cognitive Transparency.

This is The Continuum.

Welcome to The Continuum

So what is The Continuum? Well, at one point, Continuum was just what we called our solution, because all of the names we really wanted to use were already taken by other things.

Since I came up with the name, however, I have realized that Continuum really fits what I am trying to do better than any of the other names we had considered. Maybe it was just my subconscious trying to tell me something!

Firstly, The Continuum is an experiment in Social Brainstorming.

But wait, isn’t all brainstorming, by its very nature, social? It is, but in a very limited context. Generally, you and a few others are locked in a room for an hour, or an afternoon, or maybe a day, and asked to be spontaneously brilliant. Maybe there is a facilitator, and maybe even a process, or a game, or something else to help you be brilliant.

Unfortunately, this is not how the brain works. People are not brilliant on demand. Yes, some new and interesting ideas arise from these sessions. But more often than not, a few hours or days later, you come up with ideas you wish you had had during the brainstorming session. Even if you email the facilitator with your new idea and make sure it gets into the results, you have lost the potential for your idea to trigger other ideas from your colleagues. The value of the group collaboration is lost. For this and other reasons, many thought leaders have concluded that group brainstorming is useless.

Enter The Continuum.

Imagine a brainstorming session that is not constrained to a short time-window, or a single location, or a small, defined group of people.

Imagine a whiteboard covered in sticky notes, but visible to users across the organization, or across the web.

Imagine being able to release a question or idea into the cloud (or at least a private cloud), and to allow anyone, anywhere in the organization to contribute ideas, to look at the collection of notes in a visually stimulating way, to analyse and cluster the notes, and to share those results.

Imagine being able to participate in this process from anywhere, at any time, using almost any device.

Imagine that all of this is as simple as scribbling on a PostIt note and slapping it on a wall, or rearranging notes on a whiteboard.

This is The Continuum.

Some Challenges with MS Surface Development

So I have been playing with the MS Surface for a couple of weeks, and have a pretty good handle on the basics of the development model. As I said previously, the nice thing (for me, anyway) is that it is pretty standard .NET stuff. You can do pretty much anything you need to using Windows Presentation Foundation (WPF). That being said, it is not without its challenges, and I would like to share some of what I have seen so far.

1) The SDK only installs on 32-bit Windows Vista. This is a challenge for me, since my T4G laptop is running XP, and all of my other computers are running 64-bit Windows 7. The big value of the SDK is that it contains a “Surface Simulator” which allows you to experiment with Surface development without actually having a Surface. I tried setting up a 32-bit Vista VM to use for the SDK, but the simulator does not work in the VM. Now the good news: after a couple of weeks of messing around, I managed to hack the .msi file for the SDK, which then allowed me to install on 64-bit Win7. All seems to work great now. (The general shape of that hack is sketched below, after this list.)

2) WPF experience is hard to come by. I can program in WPF, and understand how it works, but when it comes to the fancy styling and more creative aspects of what you can do with XAML, I am definitely no expert. Apparently, neither is anyone else I know!

3) Changing the way you think about the user interface. This is the biggie. The UI model for the Surface is different from anything else with which I have worked. Yes, it is a multi-touch platform, which is cool, but hardly unique. If all you want to do is develop multi-touch apps, you can do it much more cheaply on a multi-touch PC – both WPF and Silverlight now support multi-touch development on Windows 7 (there is a small sketch of this below, after this list). The unique aspects of the Surface are that it is social, immersive, 360-degree, and supports interaction with physical objects. In order to make full use of the Surface platform, you have to think about all of these things. You also have to break old habits regarding how the user interacts with the platform. We are used to menus, text boxes, check boxes, drop-downs and all the usual UI components we have lived with for so long in desktop applications. Or the content and navigation models we are used to on the web. The Surface requires us to forget all of that, and think of interaction in a new way. In this sense, it is more like iPhone development. However, even iPhone development gives you a fairly strict environment which defines how your app should look. The Surface, on the other hand, is wide open. You can create almost any interaction model you can imagine, supporting multiple users working either independently or collaboratively, working from any or all sides of the screen, with or without physical objects. This requires a whole new way of thinking, at least for me.

4) Ideas. This is another big challenge. I have lots of ideas for applications for the Surface. Some of them I am pretty sure are good. Some of those are even useful. Some of my other ideas are probably downright stupid. I would like to hear your ideas. I have always believed that the more people you have coming up with ideas, and the more ideas you come up with, the better your chances of finding great ideas. So shoot me an email with any or all ideas you might have – and don’t worry, they cannot be any more silly than some of mine!
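For anyone curious about the .msi hack mentioned in point 1, the general shape of it looks something like the sketch below. To be clear, this is a hedged reconstruction, not the exact edit I made: it assumes the OS check lives in the installer's LaunchCondition table, it uses the WiX DTF library (Microsoft.Deployment.WindowsInstaller), and the file name is a placeholder. And of course, bypassing installer checks like this is completely unsupported.

    // Hedged sketch: strip the launch conditions from an installer so it will
    // run on an otherwise unsupported OS. Assumes the OS check is a row in the
    // LaunchCondition table; uses the WiX DTF library
    // (Microsoft.Deployment.WindowsInstaller). "SurfaceSDK.msi" is a placeholder.
    using Microsoft.Deployment.WindowsInstaller;

    class StripLaunchConditions
    {
        static void Main()
        {
            using (var db = new Database("SurfaceSDK.msi", DatabaseOpenMode.Direct))
            {
                // Launch conditions live in their own table; deleting the rows
                // removes the "requires 32-bit Windows Vista" style of check.
                db.Execute("DELETE FROM `LaunchCondition`");
                db.Commit();
            }
        }
    }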
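And to give a flavour of the Windows 7 multi-touch support mentioned in point 3, here is a minimal WPF 4 sketch using the stock manipulation events. Nothing in it is Surface-specific, and the class and variable names are just mine for illustration.

    // Minimal WPF 4 multi-touch sketch: a rectangle you can drag, rotate and
    // scale with your fingers using the standard manipulation events.
    using System.Windows;
    using System.Windows.Controls;
    using System.Windows.Media;
    using System.Windows.Shapes;

    public class TouchWindow : Window
    {
        public TouchWindow()
        {
            var rect = new Rectangle
            {
                Width = 200,
                Height = 150,
                Fill = Brushes.SteelBlue,
                IsManipulationEnabled = true,      // opt in to manipulation events
                RenderTransform = new MatrixTransform()
            };

            rect.ManipulationStarting += (s, e) =>
            {
                e.ManipulationContainer = this;    // coordinates relative to the window
                e.Handled = true;
            };

            rect.ManipulationDelta += (s, e) =>
            {
                var transform = (MatrixTransform)rect.RenderTransform;
                Matrix m = transform.Matrix;

                // Fold this frame's rotation, scale and translation into the
                // element's transform, centred on the manipulation origin.
                m.RotateAt(e.DeltaManipulation.Rotation,
                           e.ManipulationOrigin.X, e.ManipulationOrigin.Y);
                m.ScaleAt(e.DeltaManipulation.Scale.X, e.DeltaManipulation.Scale.Y,
                          e.ManipulationOrigin.X, e.ManipulationOrigin.Y);
                m.Translate(e.DeltaManipulation.Translation.X,
                            e.DeltaManipulation.Translation.Y);

                transform.Matrix = m;
                e.Handled = true;
            };

            Content = new Canvas { Children = { rect } };
        }
    }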

Finally, I have added a little video showing just how far you can go with the Surface UI. Hopefully, in the next couple of days, I will have a video to show of some of what I am working on.

DaVinci (Microsoft Surface Physics Illustrator) from Razorfish – Emerging Experiences on Vimeo.

First Thoughts on Microsoft Surface Development

A brand new Microsoft Surface development unit arrived this week in the Moncton T4G office. As I start to develop some prototypes, I will be doing some related posts, but I wanted to start by talking about the platform a little, and the development environment.

For anyone who has no idea what the Surface is, it is a multi-user, multi-touch platform released by Microsoft a couple of years ago. Have a look at this video to see what it can do.

Over the last few weeks, before the unit arrived, I learned quite a bit about the Surface. The first interesting thing I learned was that the Surface is not a touch screen in the sense that your iPhone or multi-touch laptop is. The surface of the Surface is just glass – it is not a capacitive or pressure-sensitive material at all. All of the touch behaviours and interactions are based instead on a computer vision system. Inside the box there is a fairly standard PC running Windows Vista, with a DLP projector pushing the image up to the table top. There are also 5 cameras inside the box which perform the actual "vision". These feed into a custom DSP board which distills the camera feeds into something a little more manageable for the PC. The fact that it is a vision-based system leads to some interesting capabilities, as well as some idiosyncrasies.

When the Surface is running in user mode, the Windows Vista UI is completely suppressed. There are no menus, no windows, and no UAC dialogs – nothing that would indicate it is even running Windows. There is also an Administrator mode which shows a standard Vista UI for administrative functions or for development.   

As far as development goes, the good news is that it is all pretty standard stuff. There are two approaches to programming for the Surface. The first is to use the Microsoft XNA Game Studio platform; the other is to use Windows Presentation Foundation (WPF). Using XNA gives you a little bit more power, as well as access to more of the "lower level" information, like raw images from the video feed. Using WPF is a higher-level programming model, and comes with a set of controls specific to the Surface UI model. The nice thing is that everything you know about .NET and WPF programming applies to the Surface. And from a larger architectural perspective, the Surface can tie into any infrastructure accessible to any other .NET-based model. It is just a different .NET UI layer.
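To make the WPF option a little more concrete, here is a hedged sketch using the ScatterView control from the Surface SDK. The control and namespace names are as I understand them from SDK 1.0, and the window and content are made up for illustration, so treat the details as assumptions rather than a definitive listing.

    // Hedged sketch of the WPF approach: a SurfaceWindow hosting a ScatterView,
    // whose children get free-form move/rotate/resize behaviour for free.
    // Control and namespace names are per Surface SDK 1.0, as I understand it.
    using System.Windows;
    using System.Windows.Controls;
    using Microsoft.Surface.Presentation.Controls;

    public class NotesWindow : SurfaceWindow
    {
        public NotesWindow()
        {
            var scatter = new ScatterView();

            foreach (var text in new[] { "Idea one", "Idea two", "Idea three" })
            {
                // Each item behaves like a PostIt note: users can drag, spin
                // and resize it from any side of the table.
                scatter.Items.Add(new ScatterViewItem
                {
                    Content = new TextBlock { Text = text, Margin = new Thickness(12) }
                });
            }

            Content = scatter;
        }
    }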

The bigger challenge in developing for the Surface is changing the way we think about the UI, and selecting the right solutions. First and foremost, Surface applications are not just a port of a standard Windows UI. Stop thinking about Windows, Icons, Menus and Pointers (WIMP). The Surface calls for a completely different model, one that I am just learning. One of the interesting statements I have read describing the Surface model is "the content is the application."

The Surface is more than just a multi-touch platform. Sure, you could implement a multi-touch solution on the Surface exactly the same as a Windows 7 multi-touch solution, but that would only use a subset of the Surface's capabilities. The key characteristics of Surface interaction are:

  • multi-user, multi-touch (up to 52 simultaneous touch points)

  • social interaction – multiple simultaneous users, collaborating or working independently

  • 360-degree user interface – users on all sides of the Surface at the same time, with the UI oriented to support all of them

  • Natural and immersive – like the physical world, only better

  • Support for physical objects integrated into the experience (tokens, cards, game pieces, merchandise) – see the sketch below
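To make that last point a little more concrete, here is a rough sketch of how an application can tell fingers and tagged physical objects apart when something lands on the table. The API names are from Surface SDK 1.0 as I remember them, so treat them as assumptions rather than a verified listing.

    // Rough sketch: distinguishing fingers from tagged physical objects using
    // the Surface SDK's contact events. API names per Surface SDK 1.0, from
    // memory - treat them as assumptions, not a verified listing.
    using System.Windows;
    using Microsoft.Surface.Presentation;

    public static class ContactDemo
    {
        public static void Attach(UIElement element)
        {
            Contacts.AddContactDownHandler(element, (sender, e) =>
            {
                if (e.Contact.IsFingerRecognized)
                {
                    // An ordinary touch - treat it like a finger press.
                }
                else if (e.Contact.IsTagRecognized)
                {
                    // A tagged physical object (token, card, game piece) was
                    // placed on the table; e.Contact identifies which tag.
                }
            });
        }
    }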

When it comes to selecting a solution to deploy on the Surface, the two most important keywords are "social" and "immersive". Social, because the best Surface applications are those in which the computer is not replacing human interaction but enhancing it. Immersive, because you want the user(s) to forget that they are using a computer, and to think only about what they want to accomplish. The how should be transparent.

Over the coming days and weeks, I will post more about the Surface and what we are doing with it. Hopefully next week I will be able to post a short video. If you have any thoughts or suggestions, I would love to hear them.
