5. Qualitative User Testing Methods
Understanding Qualitative User Testing
The most effective way to understand which parts of an interface work well and which do not is to observe actual users carrying out realistic tasks with it. The resulting qualitative insights into the usability problems users face can then be used to refine the interface. Qualitative user testing usually involves small samples of users (fewer users, so more focus on each one's behavior); the objective is to understand why exactly certain problems happen.
Moderated usability testing is a good method for identifying user experience problems and issues with task flows, and for understanding why those problems occur. The process is simple: a participant carries out specific tasks, preferably while thinking aloud, while a moderator observes the process and notes down every problem and any unexpected behavior.
Moderated user testing can be conducted either in person or remotely.
In-person moderated user testing is the preferred form, since it allows the moderator to read participants’ body language (not everything is said in words; subtle cues, such as raised eyebrows, also signal difficulty completing a task) and to recognize appropriate moments to ask questions. However, it is more expensive than remote testing and is often impossible when users are geographically dispersed.
Tools: In-person qualitative moderated testing does not require any specific equipment apart from a computer (or a prototype). However, if your budget allows, you could also run the study with an eye tracker to generate heatmaps, in order to see whether users noticed and paid attention to all important elements of the interface, e.g., each key navigation component.
Remote moderated usability testing: The participant and the facilitator are in two different locations and connect through real-time two-way communication. It works well when participants are hard to find, travel budgets are low, and time frames are tight. The downsides are that body language cannot be taken into account, it is harder to build rapport with participants, and there may be unexpected distractions (doorbells, children, etc.). Since the non-verbal cues that signal a participant is struggling are unavailable, follow-up questions and probing during testing become very important.
Tools: You can either assemble the needed setup yourself or use one of the all-in-one moderated testing services.
If doing it by yourself, you will need:
- A real-time online communication tool. It could be Skype, Webex, GoToMeeting, JoinMe, or any other, as long as it allows audio and visual communication between the participant and the facilitator.
- Tools for screen capture and recording. There is plenty of software to choose from, for example Camtasia, SnagIt, or Adobe Captivate; any tool is suitable, as long as it records the audio and video of the participant’s screen along with the voice of the moderator.
- Tools for video editing. If you need to communicate the findings to other team members, you will want to show them only the clips in which users run into usability issues, not full recordings of the tests. Video editing tools such as Camtasia, Movie Maker, or iMovie let you highlight the parts that need attention.
There are some paid all-in-one testing services that could be used instead:
- Validately (https://validately.com): makes it easy to schedule sessions, find participants who match your user personas, observe and question participants in real time, flag notable moments during testing sessions, and jump to those flags when watching the recordings later.
- UserTesting Pro Version (https://www.usertesting.com/plans) – an all-in-one solution that supports remote moderated testing either with your own participants or with UserTesting.com panel members, and allows highlighting important moments during a session.
These tools make testing easier by giving you access to large panels of users and by providing simple ways to handle incentive payments and schedule sessions. However, if your budget does not allow it, you can get the same quality of results by doing everything yourself.
Basic moderated user testing script (in-person or remote):
- Define the goals of the study – whether you are testing the full interface or some particular aspects of it.
- Identify the tasks you would like the participants to perform and write them down in the form of short scenarios.
- Recruit some representative users.
- Conduct a pilot study (a session or two before the test) to find out if there are any problems with the clarity of tasks, if the time frames are realistic and if the equipment works as expected.
- Greet each participant and introduce them to the study and to thinking aloud: explain that they need to express their thoughts verbally, and that if they have a question or do not understand something, they should say so, but they will not get any answers until the session is over.
- Demonstrate thinking aloud. Since thinking aloud is not something people do naturally, you should demonstrate it by first doing something yourself while thinking aloud, then asking the participant to do a simple task, e.g., find something on Google while thinking aloud. Practice tasks should be simple, so participants feel confident in their thinking-aloud skills.
- Ask the participants to carry out the tasks defined earlier while thinking aloud, and observe them. It is good practice to record the sessions.
- Discuss the findings with your team and consider how the interface could be changed to address the issues observed.
Number of participants: It is recommended to have 5 participants for qualitative testing, since that gives the best return on investment. However, if there is more than one clearly distinct target group (e.g., teachers, students and parents), you will need 3-4 participants per group, depending on the overlap between the groups; essentially you are running a separate test for each audience.
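The five-participant recommendation is usually justified with the well-known problem-discovery model of Nielsen and Landauer: if a single user uncovers a proportion L of the usability problems (about 31% is the commonly cited estimate), then n users are expected to uncover 1 - (1 - L)^n of them. A minimal sketch of the arithmetic, assuming the commonly cited L = 0.31:

```python
def problems_found(n_users, l=0.31):
    """Expected share of usability problems uncovered by n users,
    per the Nielsen-Landauer problem-discovery model.
    l is the share a single user finds (0.31 is the common estimate)."""
    return 1 - (1 - l) ** n_users

# Five users already uncover roughly 84% of the problems,
# and each further user adds less and less new information.
print(round(problems_found(5), 2))   # -> 0.84
```

This diminishing-returns curve is why running several small tests with different groups (as described above) pays off more than one large test with a single group.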
Tasks: Rather than telling participants to do something without any context or explanation, it is better to provide a short scenario that sets the context and helps them fully understand the task. An example task scenario: “You are going to Florida for a week on the 20th of June. You would like to find the best deal on a hotel”.
Good task scenarios should:
- Be realistic: they should not ask participants to do what they would not normally do; otherwise participants might complete the task without realistically engaging with the interface. Overly specific tasks (e.g., “Buy a size 12 red skirt of X brand”) are often unrealistic for many participants, since they might not search that way – they might browse by style instead. A better scenario would be “Buy a skirt for under $20”. The focus is not how successful participants are at finding things, but how they use the interface to achieve their goals.
- Encourage action: participants should actually perform the tasks, not just describe how they would do them (e.g., instead of “Where would you click to find X?”, it is better to say “Find X”). The participants should not answer in words; you need to observe the whole process of completing the task.
- Avoid hidden clues in task descriptions: for example, if a sign-up button is labeled “Sign up for a newsletter”, the task should ideally not be “Sign up for the events newsletter” but rather “Find a way to get email updates about upcoming events”. However, you should not be too vague either, since that would make tasks hard to understand; all the information a participant needs to complete the task must be provided.
Poorly written tasks focus too much on making participants interact with a specific feature instead of observing how they choose to use the interface.
What to look at: Focus on the paths participants take to achieve their goals, the problems they encounter, and the comments they make.
Talking to participants: Talk less, observe more, since the key is observation, not conversation. Before probing a participant further, always consider whether you already have enough information from observation alone and whether you would truly benefit from asking questions. Appropriate moments to interrupt are when the participant has offered comments, asked questions, or naturally interrupted their work.
It is important not to provide answers or cues about how to do the tasks. If a participant is unsure how to do something, instead of giving advice you could say, “What would you do if you were doing this at home on your own?” – always reflect the participant’s questions back. This is the hardest part of being a facilitator.
Thinking aloud: it is good practice to encourage participants to think aloud during the process. This way you can discover misconceptions and misinterpretations, and understand why users struggle with particular parts of the user interface. However, you should not push users too hard to think aloud all the time, so the session does not become too unnatural.
Heatmaps are used for tracking a user’s activity within a page or a screen, so you can see which elements they attempt to click, which areas they pay attention to and how far they scroll. It helps in understanding users’ behavior patterns and preferences.
There are two types of heatmaps: mouse tracking and eye tracking. Since eye tracking requires specific equipment for tracking participants’ eye movements, mouse tracking is used most often. An additional benefit of mouse tracking is that it gathers data about the behavior of many actual users as they naturally use a website or an app (the software collects data automatically every time the interface is used), rather than from a small sample invited to a lab for eye tracking. On the other hand, eye tracking is more accurate – 100% accuracy compared to the 85-90% of mouse tracking.
Mouse tracking and eye tracking present all of users’ clicks and mouse or eye movements in the form of a heatmap, so you can see which areas get the most attention and which are ignored, and which elements distract users’ attention from key interface elements; from cursor movement you can infer which areas users found the most interesting.
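Under the hood, heatmap tools aggregate many users’ recorded cursor (or gaze) coordinates into a grid of per-cell intensities, which is then rendered as colors over the page. A minimal, hypothetical sketch of that aggregation step (the function and parameter names are illustrative, not any specific product’s API):

```python
from collections import Counter

def heatmap_grid(points, page_width, page_height, cell_size=50):
    """Bin recorded (x, y) cursor or gaze samples into a grid of
    cell -> hit-count intensities: the data behind a rendered heatmap.
    Samples outside the page bounds are discarded."""
    grid = Counter()
    for x, y in points:
        if 0 <= x < page_width and 0 <= y < page_height:
            grid[(x // cell_size, y // cell_size)] += 1
    return grid

# Example: three samples cluster near the top-left corner, one is far away.
samples = [(10, 12), (40, 30), (25, 25), (400, 600)]
grid = heatmap_grid(samples, page_width=1024, page_height=768)
print(grid[(0, 0)])  # -> 3: the top-left cell drew the most attention
```

Cells with high counts are drawn “hot” (red) and cells with low counts “cold” (blue), which is why clusters of attention and ignored areas stand out at a glance.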
Most mouse tracking software is very easy to set up and does not require any technical knowledge.
Mouse tracking software that could be used to generate heatmaps:
- Crazyegg http://www.crazyegg.com/
- Mouseflow https://mouseflow.com/
- Clicktale https://www.clicktale.com/ – a more enterprise-level option.
- Luckyorange http://luckyorange.com/
There are some free alternatives: