Usability Testing

Usability testing evaluates a product by testing it with real-life, representative users. Participants try to complete typical tasks while researchers, stakeholders, and development team members watch, listen, and take notes.

A usability test aims to collect qualitative and quantitative data and gauge participants' satisfaction with your product. This data reveals areas that need clarification and uncovers opportunities to improve your product's user experience.

You can test a product at any stage of development. As long as tasks can be completed or imagined, you can test with:

  • Low-fidelity prototypes (paper or wireframes)
  • High-fidelity prototypes (InVision or Figma)
  • Alpha or Beta versions (working software)

To run a meaningful usability test, you must design a test, recruit participants, facilitate the test, and analyze your results. You do not need a formal testing lab or a lot of participants, and testing can be done in person or remotely.

Designing Usability Tests

Usability test tasks are realistic activities a participant would perform in real life, and their order should follow a natural flow through your product. Tasks can be specific or open-ended:

  • Prescribed tasks are pre-determined for the participant to complete.
  • Participant-defined tasks ask participants to tell you something they usually do with your product, then have them do it.
  • Open-ended tasks allow participants to explore your product freely.

Regardless of task type, collect participants' demographic and product usage data before the test begins. Then prompt participants after each task for information such as ease of use, limiting the number of post-task questions so as not to overwhelm them. After your usability test, a final survey can measure satisfaction.

Run your test with a pilot participant to ensure the tasks are possible and make sense.

Recruiting

Most product problems surface after 5-8 test participants. Your participants should adequately reflect your user base: you want participants with the right demographics and the right amount of experience with your product. A screener will help you filter out participants who don't match your target users. Ask questions that help narrow your pool.
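If you collect screener responses digitally, the filtering step can be automated. The sketch below shows one way to do it; the field names and criteria are hypothetical and would come from your own screener questions.

```python
# Hypothetical screener filter: keep only respondents who match the
# target user profile. Field names and criteria are illustrative.

def matches_target_users(respondent):
    """Return True if a screener respondent fits the recruiting profile."""
    return (
        25 <= respondent["age"] <= 55
        and respondent["uses_product_weekly"]
        and respondent["role"] in {"analyst", "manager"}
    )

respondents = [
    {"age": 30, "uses_product_weekly": True, "role": "analyst"},
    {"age": 62, "uses_product_weekly": False, "role": "retired"},
    {"age": 41, "uses_product_weekly": True, "role": "manager"},
]

qualified = [r for r in respondents if matches_target_users(r)]
print(len(qualified))
```

A spreadsheet filter works just as well; the point is to apply the same criteria to every respondent so the resulting pool reflects your target users.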

Offer your participants compensation. Typically participants receive a gift card or gift certificate for their time, and even a small gift will help with recruitment.

Facilitating Usability Tests

A usability test facilitator guides the participant through the test. The facilitator gives instructions, answers the participant's questions, and asks follow-up questions. Facilitating usability tests well takes practice.

Task instructions can be verbal (the facilitator reads them) or written and handed to a participant. Most facilitators require a participant to read task instructions aloud to ensure that the participant understands. As the participant performs these tasks, a facilitator observes the participant's behavior and listens for feedback. The facilitator will ask follow-up questions to elicit more information from the participant. Good questions to ask during a usability test include:

  • What are you hoping to see?
  • What are you thinking?
  • What are you trying to do?
  • Where would you go?
  • What do you expect to happen?
  • What did you expect to happen?
  • Is there anything else you might try?

Facilitators should avoid the following:

  • Using terminology that appears in the product.
  • Providing clues about what to do.
  • Offering approval with words, facial expressions, or body language.
  • Distracting the participant with keyboard clicks or background noise.

Having too many observers during a usability test can make some participants uncomfortable. Keep your audience small but take copious notes. Gather quotes, reactions, facial expressions, gestures, and behaviors. Leave timestamps in your notes to highlight them later as you review your recording.

It is tempting to alter a test after a few sessions. Please don't do this, as it will skew your analysis. Watching users struggle with your flow or product is a meaningful experience, and catching mistakes upfront allows you to correct them before a version or feature ships.

Analyzing Results

Stakeholders will be curious about the results of your usability test. Prepare your report and gather all stakeholders to review your findings. Start with the things that went well, then list problems, discussing each in detail.

Your report will answer these questions:

  • What worked well?
  • How did participants perform?
  • Were they able to complete tasks?
  • How long did it take to do so?
  • What mistakes occurred along the way?
  • How many problems did they encounter?
  • What did they say?

Your results could include the following:

  • Time on task - the time it takes a participant to complete a task.
  • Errors - the number of mistakes each participant makes while completing a task.
  • Satisfaction - the measure of a participant's feelings about your product before, during, and after the test.
  • Mouse clicks - the number of clicks a participant makes before completing a task.
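As a sketch, the quantitative metrics above can be summarized per task with a few lines of code. The data here is hypothetical, and the convention of averaging time on task over successful attempts only is one common choice, not the only one.

```python
# Hypothetical per-participant results for one task. Values are
# illustrative, not from a real study.
from statistics import mean

results = [
    {"completed": True,  "seconds": 74,  "errors": 1, "satisfaction": 6},
    {"completed": True,  "seconds": 52,  "errors": 0, "satisfaction": 7},
    {"completed": False, "seconds": 180, "errors": 4, "satisfaction": 3},
    {"completed": True,  "seconds": 95,  "errors": 2, "satisfaction": 5},
    {"completed": True,  "seconds": 61,  "errors": 0, "satisfaction": 6},
]

completion_rate = sum(r["completed"] for r in results) / len(results)
# Time on task is often summarized over successful attempts only.
avg_time = mean(r["seconds"] for r in results if r["completed"])
avg_errors = mean(r["errors"] for r in results)
avg_satisfaction = mean(r["satisfaction"] for r in results)

print(f"completion rate: {completion_rate:.0%}")    # 80%
print(f"avg time on task: {avg_time:.1f}s")         # 70.5s
print(f"avg errors: {avg_errors:.1f}")              # 1.4
print(f"avg satisfaction: {avg_satisfaction:.1f}")  # 5.4
```

Summaries like these make it easy to compare tasks against each other, or the same task across test rounds, when you present findings to stakeholders.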