With over 13 years' experience working in UX, you'd think I'd know it all. We see similarities and patterns in the research we do for our clients all the time, especially when they're focused on a similar outcome - rebuilding a customer booking journey, for example.
Say I have a client wanting to improve sales conversion on their sofa website. My expertise tells me we did something not dissimilar recently, and the results could probably work here too. Yet even when the client, users and context remain the same, something new can always crop up in another round of testing. Still, I can probably predict at least some of the themes likely to emerge, so it's not unreasonable to wonder why you'd pay me to essentially do the same work twice.
The primary notion of usability is that an object designed with a generalised user's psychology and physiology in mind is, for example, more efficient to use, easier to learn and more satisfying to use.
In our case, that 'object' is more than likely a website, an app, or perhaps an online tool or digital product. But the principle is the same: we study people (through user testing and broader research) to help make something easier for them to use. Usability is, broadly, a set of rules derived from years and years of psychology research.
We have theories behind certain rules which can be used as working models for approaching a problem, but those models may not apply directly to what you're doing each and every time, because there are always variables to consider. So, going back to my first point, this is why we will always need to user test, however similar your project might look on paper to one that has gone before.
Right now there is something of a crisis happening in the world of psychology. Long-held theories are failing replication tests, forcing researchers to question the strength of their methods and the institutions that underpin them. That's the beauty of science: as we learn more, we refine our thinking rather than blaming the person who originally conducted the experiment whose results now contradict our long-held beliefs.
In 2015, the Open Science Collaboration announced that, over a three-year project, it had attempted to replicate the results of over 100 psychology studies and succeeded only 40% of the time.
With this in mind, it's imperative that every project is user tested on its own merits, because whilst I can make an educated guess at what might come up based on my expertise, ultimately my voice is just another one in a room full of people assuming they already know the outcome. User testing and design experiments provide empirical evidence we can use to make decisions, largely removing the element of doubt. This stops great design ideas being watered down by unfounded opinions based on loosely similar objectives. I have 13 years' experience in user-centred design but, despite what I like to believe, I'm not infallible - unlike this agency, apparently!
Running user tests and design experiments enables us to innovate and deliver tangible benefits to our clients - higher profits, improved NPS scores and increased conversions. Designing with behavioural insights brings freshness to every project. Without it, you're in danger of churning out the same dross as your competitors with a different logo plonked on top. And what's the point of that? If you're going to rip off your competitors, at least user test them too!
Of course, just as every client differs, so do the users themselves, and we cannot rely solely on assumptions collected over years of experience. Users' level of competence and their background can both have a profound impact on how they perceive your digital product. A perfect example here is the lazy stereotype that 'old people won't do things online'.
We all make them - me, you, your colleagues, your boss, your boss's boss and so on. Assumptions are dangerous because they make us lazy. Time is short and the deadline is looming - what do you do? Throw some educated guesses in a pile and hope for the best, only to find out they were wrong after you launch and the budget is spent? Nope. I advocate taking those assumptions and testing them now - before you launch, and before you've spent all of that budget on shiny animations and ill-conceived image carousels.
Despite the current crisis in the science of psychology, when it comes to user-centred design large swathes of us UXers have fallen into the same established design patterns, even though technology is still moving and changing very quickly. Voice search is tipped to explode, with sales of smart speakers predicted to hit 56.3 million worldwide by the end of 2018. With little in the way of historic learnings to base your assumptions on, how do you know what will and won't work here? The entire user journey differs from anything that has gone before, given that it's entirely conversational.
The answer? You guessed it: user testing. Evidence over expertise, every time (although that expertise is incredibly useful to have!). UX is king, and the return on investment is clear to see, both in our own case study results and in many, many articles too. Not got time to read them all? Jason Amunwa sums everything up nicely in this infographic.