Running a Usability Test
Usability tests are one of the most critical methods for testing, challenging, or verifying your product. They are designed to give you insight into what needs to change, what's working, and what to do more of, and they're crucial for the growth stage of your startup.
Once you have planned your test and recruited your participants, it's time to prepare to conduct the test itself. To do so, you'll want to choose the moderating technique that is right for your test, set up your space and equipment, and run a pilot test before testing with actual participants.
Choosing a Moderating Technique
In her article Moderating Usability Tests, Jen Romano Bergstrom notes that selecting the best moderating technique for your test depends on your session goals. (Bergstrom is a UX Researcher at Facebook, where she works to understand the UX of Facebook in emerging markets; she has 12 years of experience planning, conducting, and managing user-centered research projects.) Some standard moderating methods include:
- Concurrent Think Aloud (CTA) is used to understand participants’ thoughts as they interact with a product by having them think aloud while they work. The goal is to encourage participants to keep a running stream of consciousness as they work.
- In Retrospective Think Aloud (RTA), the moderator asks participants to retrace their steps when the session is complete. Often participants watch a video replay of their actions, which may or may not contain eye-gaze patterns.
- In Concurrent Probing (CP), the researcher asks follow-up questions while participants work on tasks, whenever they say something interesting or do something unique.
- Retrospective Probing (RP) requires waiting until the session is complete and then asking questions about the participant’s thoughts and actions. Researchers often use RP in conjunction with other methods—as the participant makes comments or actions, the researcher takes notes and follows up with additional questions at the end of the session.
Pilot Testing
A pilot test is a small-scale trial in which a few participants take the test and comment on its mechanics. They point out any problems with the test instructions, instances where items are unclear, and formatting or other typographical issues.
Before conducting a usability test, make sure all of your materials, consent forms, and documentation are prepared and checked. It is essential to pilot test your equipment and materials with a volunteer participant. Run the pilot test one to two days before the first session so that you have time to deal with any technical issues, or to change the scenarios or other materials if necessary. The pilot test allows you to:
- Test the equipment
- Give the facilitator and note-takers practice
- Get a good sense of whether your questions and scenarios are clear to participants
- Make any last-minute adjustments
Conducting the Test Session
Keep these guidelines in mind during each session:
- Treat participants with respect and make them feel comfortable.
- Remember that you are testing the site, not the users. Help participants understand that they are helping you test the prototype or Web site.
- Remain neutral – you are there to listen and watch. If the participant asks a question, reply with “What do you think?” or “I am interested in what you would do.”
- Do not jump in and help participants immediately and do not lead the participant. If the participant gives up and asks for help, you must decide whether to end the scenario, give a hint, or provide more substantial support.
- The team should decide in advance how much of a hint to give and how long to let participants work on a scenario when they are going down an unproductive path.
- Take good notes. Note-takers should capture what the participant did in as much detail as possible, as well as what they say (in their own words). The better the notes taken during the session, the easier the analysis will be.
- Measure both performance and subjective (preference) metrics. People's performance and preferences do not always match. Users will often perform poorly yet rate the experience highly; conversely, they may perform well but give low subjective ratings.
- Performance measures include: success, time, errors, etc.
- Subjective measures include the user’s self-reported satisfaction and comfort ratings.
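To make the performance-versus-preference comparison concrete, here is a minimal Python sketch that aggregates both kinds of metrics per task. The article prescribes no tooling, so this is only an illustration: the record fields (success, seconds, errors, satisfaction) and the sample data are invented for the example.

```python
from statistics import mean

# Hypothetical session records: one entry per participant per task.
# All field names and values are illustrative, not from the article.
results = [
    {"task": "find pricing page", "success": True,  "seconds": 48,  "errors": 0, "satisfaction": 6},
    {"task": "find pricing page", "success": False, "seconds": 95,  "errors": 3, "satisfaction": 5},
    {"task": "sign up",           "success": True,  "seconds": 120, "errors": 1, "satisfaction": 2},
]

def summarize(records):
    """Aggregate performance and subjective metrics for each task."""
    by_task = {}
    for r in records:
        by_task.setdefault(r["task"], []).append(r)
    summary = {}
    for task, rs in by_task.items():
        summary[task] = {
            "success_rate": sum(r["success"] for r in rs) / len(rs),
            "mean_seconds": mean(r["seconds"] for r in rs),
            "total_errors": sum(r["errors"] for r in rs),
            "mean_satisfaction": mean(r["satisfaction"] for r in rs),
        }
    return summary

summary = summarize(results)
# Printing both columns side by side makes mismatches visible, e.g. a slow,
# error-prone task that still receives a high satisfaction rating.
for task, metrics in summary.items():
    print(task, metrics)
```

Keeping performance and subjective scores in the same per-task summary, rather than in separate reports, makes the mismatches the article warns about much harder to overlook during analysis.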
Example Usability Test Session
Here is an example test session.
- The facilitator will welcome the participant and explain the test session, ask the participant to sign the release form and ask any pre-test or demographic questions.
- The facilitator explains thinking aloud and asks if the participant has any additional questions. The facilitator describes where to start.
- The participant reads the task scenario aloud and begins working on the scenario while thinking aloud.
- The note-takers take notes of the participant’s behaviours, comments, errors and completion (success or failure) on each task.
- The session continues until all task scenarios are completed or time allotted has elapsed.
- The facilitator either asks the end-of-session subjective questions or directs the participant to an online survey, thanks the participant, gives them the agreed-on incentive, and escorts them from the testing environment.
The facilitator then resets the materials and equipment, speaks briefly with the observers and waits for the next participant to arrive.