Usability Testing [was: Re: FAQ page]
Matthew Nuzum
matthew.nuzum at canonical.com
Fri Jul 28 16:29:23 UTC 2006
On Fri, 2006-07-28 at 12:00 +0200, Andreas Lloyd wrote:
> Duncan Lithgow wrote:
> > If someone could make a spec with differently graded tasks, I'd be happy
> > to record desktop sessions and get some friends and family to carry out
> > the listed tasks. But writing the tasks and instructions should be done
> > by someone with good understanding.
I can do this. I'll work on it either today or shortly thereafter. I've
got a bit of a backlog...
> At the GNOME conference this year, there was a talk by Novell's
> Usability person, Anna Dirks, about doing your own Usability testing. I
> couldn't find the slides, but there's a truckload of material from
> Novell's usability testing of the GNOME desktop available at the
> http://www.betterdesktop.org website.
> Traditionally, usability testing involves multiple cameras, to record
> not only what happens onscreen but also how the user uses her hands and
> how facial expressions change as she attempts to complete the various
> tasks. This is quite extensive, and generates a lot of data - both
> quantitative (number of successes vs. failures at each task) and
> qualitative (how the users react, their emotions and stress at the given
> situation), as well as general behavioural data - which may be dependent
> on the users' level of experience with similar products.
You've made some good points; however, that level of formality may be
overkill. The problem with testing is that you get a ton of data, and it
can be very difficult to translate that data into an action plan.
Keeping this within the scope of testing the website:
# If possible, watch the user as they test. This produces the most
valuable data. Unfortunately, the data often comes faster than you can
write it down.
# Use vncrec to record the screen of the tester's computer (see the
sketch after this list)
# Use a microphone or a telephone-recording setup to record their voice,
and encourage them to talk about their experience as they go (you can
sync the video and audio later by having them say "click" with their
first click)
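For anyone who wants to wire this up, here's a minimal sketch of how the
recording machine could drive both captures at once. Note my
assumptions: vncrec's "-record <file> <host:display>" invocation
(double-check your man page), ALSA's arecord for the audio, and
"tester-pc:0" plus the file names are just placeholders.

  #!/usr/bin/env python3
  # Run on the (hidden) recording machine. Starts vncrec against the
  # tester's VNC display and arecord for the microphone, then stops
  # both when you press Enter.
  import subprocess

  # Placeholder host:display of the tester's VNC server.
  TESTER_DISPLAY = "tester-pc:0"

  screen = subprocess.Popen(
      ["vncrec", "-record", "session.vnc", TESTER_DISPLAY])
  voice = subprocess.Popen(["arecord", "-f", "cd", "session.wav"])

  input("Recording... press Enter when the session is over. ")

  # Stop both recorders; sync the two files afterwards using the spoken
  # "click" at the tester's first mouse click as the common reference.
  for proc in (voice, screen):
      proc.terminate()
      proc.wait()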
If a site hasn't been formally tested in the past, it won't take long to
figure out where people are getting stuck. In a recent test I performed,
it took only six testers to identify the three most important tasks
needed to move the site leaps and bounds ahead in usability.
My ideal setup for a low-budget test is two computers: one for the
tester and one (probably hidden) to record the VNC session and the
audio. Add a person to facilitate the testing, who makes the tester feel
comfortable and introduces the test, and provide the goals and relevant
details (such as login info, if necessary) on a piece of paper.
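To make that concrete, the sheet might read something like this (the
tasks here are hypothetical, just to show the level of detail):

  1. Find out what the most recent release of Ubuntu is called.
  2. Find the page where you can download it.
  3. Find out how to report a problem with the website.

Each task should have a clear end point so both of you know when the
tester is done.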
The next best thing would be to simply watch people without using any
recording equipment. You'd be amazed at how useful this is. (But don't
help them... just watch!)
The next best thing after that would be to do the testing remotely and
record the telephone call.
Before testing, you discuss the goals and ask if there are any
questions. Once testing begins, the facilitator doesn't answer questions
(instead referring the tester back to the sheet of paper). The
facilitator should take notes, but shouldn't disturb the tester with the
note-taking.
I've had no problems using friends and family in testing in the past; I
always start there. For more formal testing I bring in other people as
well. I've also paid testers with a $25 gift certificate, emphasizing
that their pay is not at all contingent on their performance and that we
know there are problems and are doing the testing in order to find them.
I don't agree that you need to jump through so many hoops to make people
feel comfortable (e.g., the "imagine you're a chef..." scenario). Make
them feel welcome. Let them know that it's OK to fail. Give them short
tasks. Quick and dirty testing is good.
Anna said alcohol is good for testers? Hmm... that could produce
interesting results. "Here, before you test, drink this... it's
Tequila." Just kidding, I know that's not what she meant. I wonder how
that would skew the results though. :-)
I will admit that I like to maximize the value of my testers. One
application I tested was supported by a help desk, so if a task failed,
instead of ending the session the tester was allowed to call me and I
would help them with that particular task. That way they could move on
to the next problem.
--
Matthew Nuzum
newz2000 on freenode