Following our last blog post, which ranted about a dodgy infographic doing the rounds that gets the concept of user testing completely wrong, we wanted to give a quick crash course for the uninitiated on the types of user research you can use for different tasks, and how many people you ideally need to make the process worthwhile. The type of user research you choose depends on what questions you want answered. Here, we've picked a few common scenarios and the kind of user testing we'd recommend for each: the right tool for the job.
Usability testing assesses how people interact with an interface or product, whether it's yours or a competitor's. Done right, it becomes central to the design process, ensuring you build the right thing from the start. Usability testing is a qualitative research method, designed to uncover insights, opinions and motivations, so statistical significance is not a consideration when you're deciding how many participants you need. In fact, the numbers required can be quite low. We recommend testing responsive websites with five participants per major breakpoint (on a responsive website, a breakpoint is a point where the design changes shape or size, triggered by screen size and orientation). Academic research indicates that five participants can expose around 85% of usability issues; a second round with the same number should expose a further 13%, and a third round the remaining 2%. These numbers also depend on the type of interface being tested; on a responsive website, for example, many issues are specific to the size of the device.
Card sorting is a process that gets users to sort information in a way that makes sense to them, helping you figure out how to structure your content so your customers can find things quickly.
Tree testing is the best approach when you already have a content structure and want to know if it makes sense to everyone else. Testers are asked to find information on a simplified version of your site. Card sorting and tree testing are quantitative research methods, relying on numerical data to deliver meaningful results. This means sample sizes become more significant: the more respondents you have within your target audience, the more confident you can be in the results. The exact number we recommend depends on the complexity of the structure and the potential for ambiguous results.
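To see why sample size matters for a quantitative method like tree testing, consider the confidence interval around a task success rate. A minimal sketch using the normal approximation, with made-up numbers:

```python
import math

# Rough 95% confidence interval for a tree-test task success rate,
# using the normal approximation. All figures are illustrative.

def success_ci(successes, participants, z=1.96):
    p = successes / participants
    margin = z * math.sqrt(p * (1 - p) / participants)
    return max(0.0, p - margin), min(1.0, p + margin)

# 30 of 50 testers found the item: a wide, uncertain interval.
print(success_ci(30, 50))
# 300 of 500: same 60% success rate, much tighter interval.
print(success_ci(300, 500))
```

With 50 testers the true success rate could plausibly be anywhere from the mid-40s to the mid-70s; with 500 it narrows to a few points either side of 60%, which is why more respondents buy you more confidence.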
Straightforward data analysis using tools like Google Analytics, combined with session recall tools like Hotjar and Jaco, can show you what users are really doing on your site, surfacing both issues and successes.
Split testing takes two variations of a product or interface, puts them in front of two groups of real customers, and uses quantitative data to tell you whether one performs better than the other.
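One common way to decide whether a split-test result is real rather than noise is a two-proportion z-test on the conversion rates. A minimal sketch with made-up numbers (in practice you'd use a stats library or an A/B calculator, and fix your sample size before the test starts):

```python
import math

# Two-proportion z-test: did variant B convert better than A?
# conv_* are conversion counts, n_* are visitor counts.
# All numbers below are illustrative.

def z_score(conv_a, n_a, conv_b, n_b):
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

z = z_score(conv_a=100, n_a=2000, conv_b=135, n_b=2000)
print(f"z = {z:.2f}")  # |z| > 1.96 means significant at the 95% level
```

Here a 5% versus 6.75% conversion rate over 2,000 visitors each clears the 1.96 threshold, so you could reasonably prefer variant B; with far fewer visitors, the same rates would not.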
Ethnographic research (watching people do things) helps you figure out how your product would fit into the everyday life of your target market by showing you how that target market acts and thinks.
Depth interviews and co-design workshops can help you understand your users' attitudes to a problem, getting inside the heads of the people who are key to your product's success. The ideal number of participants for ethnographic research and user interviews depends on the specific problem space you're investigating, but typically we recommend speaking to around 10-12 people who you think may have the problem you're trying to solve. This type of research should happen at the start of a development process, when you're exploring a wide problem space and trying to understand the impact. The key is not to jump on what one or two people say, but to look for patterns and themes that help you close in on what to focus on. You can then organise a second round to explore those themes in more detail.

It's also important to structure interviews in the right way to get the best results. While it might seem like a good idea to hit the streets with a clipboard, in reality this is unlikely to deliver useful information. People having a clipboard thrust at them are likely to be unwilling to talk to a stranger, especially if you're asking about valuable things on their person, like their mobile or laptop! They're also possibly not the exact demographic you need, and are unlikely to stop for long enough to really explore your problem space and validate assumptions. And a few scribbled notes on a clipboard lose all nuance when they're conveyed to a product team later. You need to be able to see and hear the people you interview in order to understand what they say.