Enter the maze

Wake Up! Fix it!: Observe

So, you are a usability expert. You've been set the task of creating a new usable design for a hotel alarm. What do you do?

A woman hitting the alarm at 5:15

You have already explored the problem design and some others. As an expert you can do better than just informally exploring existing designs. You can use a method.

Some usability evaluation methods rely on your expertise - the power of your ability to see what the problems will be, your ability to predict the future. We will move to those later. It is always a good idea to take a reality check though, and rather than just trying to predict the future, to see what people actually do. There are expensive high-tech ways of doing that and quick and dirty simpler ones. For example, if you are interested in hotel alarms, you could wire up some rooms Big Brother style with hidden cameras, and use special versions of the alarm that record every button press people make and when they make it. You can then later pore over all the data. The data would be good (assuming you could find volunteers to stay in the wired-up rooms who were just normal hotel guests). Trouble is, it would take a lot of time and be quite expensive to do. What are the simpler ways?
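To give a flavour of what that instrumented alarm might record, here is a minimal sketch of a button-press logger. Everything here (the class name, the button names) is invented for illustration - it just shows the idea of stamping each press with a time so the data can be pored over later.

```python
import time

class ButtonLogger:
    """A toy logger for an instrumented gadget: records each
    button press with the seconds elapsed since logging began."""

    def __init__(self):
        self.start = time.monotonic()
        self.events = []  # list of (seconds since start, button name)

    def press(self, button):
        self.events.append((time.monotonic() - self.start, button))

# Simulate a guest fumbling with the alarm buttons.
log = ButtonLogger()
log.press("SET")
log.press("HOUR")
log.press("SNOOZE")

for elapsed, button in log.events:
    print(f"{elapsed:7.3f}s  {button}")
```

In a real study the log would be written to storage inside the gadget and collected afterwards; the principle - timestamped events you can replay later - is the same.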

Lab rats

An alternative would be to do a lab-like experiment. Bring a series of people into a lab, sit them at a table, and get them all to do the same fixed tasks with the gadgets that you are interested in. You just tell them WHAT to achieve, not HOW to do it. For example, if evaluating a radio, you might ask them to tune it to a specific station and store it as a "favourite". You don't tell them what buttons to press though. For an alarm you might ask them to set the alarm for 8:30 (or some time in the very near future), then switch it off when it goes off.

You then record each action they take - the buttons they push, the dials they turn. You can either do this by taking notes, or use a camcorder to really record it all. Knowing what the correct sequence of actions is (from the instruction manual that you have but they didn't) you can see the points where they went wrong. Those are points in the interaction design to focus on. Do you need better labels? Should some buttons be bigger? Would a dial be better than a button to make it easier to do?...and so on. If several of your volunteers make similar mistakes then you may have found a serious problem with the design.
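The comparison described above - lining up what a volunteer actually did against the correct sequence from the manual - can be sketched in a few lines of code. This is just an illustrative sketch with made-up button names, not part of any real evaluation tool.

```python
def first_divergence(correct, recorded):
    """Return the index of the first action where the volunteer
    went off the correct path, or None if they matched throughout."""
    for i, (want, got) in enumerate(zip(correct, recorded)):
        if want != got:
            return i
    # One sequence may simply be shorter: stopping early (or pressing
    # extra buttons) also counts as going wrong.
    if len(recorded) != len(correct):
        return min(len(recorded), len(correct))
    return None

# The manual's sequence for setting the alarm (invented for illustration).
correct = ["hold SET", "press HOUR", "press MIN", "release SET"]
# What one volunteer actually did.
recorded = ["hold SET", "press MIN", "press HOUR", "release SET"]

print(first_divergence(correct, recorded))  # → 1: went wrong at step 1
```

If you run this over every volunteer's recording, the steps where divergences cluster are exactly the points in the design to focus on.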

Watching people make mistakes only tells you that there are problems, though. It doesn't tell you why they are getting it wrong. Two good ways to enhance this kind of method are post-interviews and think-aloud.

Post Interviews

One way to find out why people make mistakes is to ask them afterwards. You talk them through what they did, or better still play them the video, and ask them why they did it and what they were thinking. You record their comments and go through them afterwards to work out what they mean for the design.

Think-aloud

Alternatively, have a tape recorder going whilst they are doing the task and ask them to explain what they are doing as they go. This is twice as quick, but most people find it quite hard (and a little weird) to do. When we are thinking we tend to shut up!

Some mistakes people find easiest to explain as they make them. Others can only be explained afterwards. Often you do not even realise straight away that you made a mistake, so explaining why you did it can be hard.

Want to try out your evaluation skills? Below are some prototypes of different related gadgets to evaluate: mobile radios and alarms.

You can also try it out on existing gadgets from around the house, and once you have created your own prototype, perhaps using PowerPoint, you can evaluate that too. You probably won't need many volunteers before you start spotting problems.

Once you have seen the sort of problems users encounter, it's time to do some expert analysis.


The Wake Up! Fix it! series of cs4fn articles is based on a Science Week activity organised by the Department of Computer Science at Queen Mary, University of London, with support from the Research Councils UK.