A Comprehensive Guide To User Testing


So you think you’ve designed something that’s perfect, but your test tells you otherwise. In this article, Christopher Murphy focuses on usability testing: evaluating the design decisions you’ve made against a representative set of users to check whether your assumptions are correct. User testing should happen at every point in the process as an integral part of an iterative design workflow, so it’s important to establish a structured framework for testing throughout. Let’s explore the importance of user testing.

(This is a sponsored article.) With a prototype of your design built, it’s important to start testing it to see if the assumptions you have made are correct. In this article, the seventh in my ongoing series exploring the user experience design process, I’ll explore the importance of user testing.

As I noted in my earlier article on research, where I explored the research landscape, there are many different types of research methods you can use, and a variety of different user tests you can run, including:

  • Usability Testing
  • Eye Tracking
  • Interviews and Focus Groups

In this article, I’ll be focusing on usability testing, where we evaluate the design decisions we have made against a representative set of users to test if our assumptions are correct.

With your prototype in hand, you might be all set to go with the final build of your website or application, but it’s important to pause and undertake some testing at this stage in the process. Getting some typical users in front of your design is critical so you can get a feel for what works and — equally — what doesn’t.

Similarly, when you’ve undertaken your final build, you might be forgiven for thinking that everything is finished. In fact, once you’ve launched, you’re only at the start of the journey. Ideally, you’ll undertake some further testing and, with your findings from that testing in hand, revisit your design and address any issues you might have discovered.

Remember: Design is an iterative process. There are always improvements to be made, informed by your testing.

In short: User testing should be happening at every point in the process as an integral part of an iterative design process. With that thought in mind, it’s important to establish a structured framework for user testing throughout the design process:

  • Before you undertake your initial design, perhaps using paper prototypes;
  • During the digital prototyping phase, using lo-fi and hi-fi clickable prototypes; and
  • At the end of the process, helping you to refine what you’ve built.

You might not have the budget to run fully-fledged usability tests, and for many projects that’s understandable, but that doesn’t mean you shouldn’t at the very least test your designs informally. Guerrilla testing — ad-hoc testing with passers-by, run in an informal manner — is better than no testing.

The bottom line? Any testing you can do — no matter how informal — will serve you well. With the importance of usability testing underlined, let’s explore the why and when of testing, stress the importance of good preparation, and dive into running usability tests effectively.

Usability Testing: Why And When?

First things first, to run an effective usability test you don’t need formal ‘laboratory conditions.’ It’s far better to run some usability testing using what you have to hand than to run no usability testing at all.

You might be wondering, why bother? Usability testing takes time and — when you’re under pressure and with deadlines looming — you might be tempted to forego it. Don’t make this mistake: it will cost you more in the long run. Usability testing will, of course, require a degree of investment in time and money, but it will more than pay off.

Your goal is to gather as much feedback as you possibly can as early as you possibly can. This helps you to identify any design issues before you get to the expensive part of the process when you reach the final build. It’s too late — and too expensive — to leave user testing until after you’ve built your product. At that point in the process, changes are incredibly costly.

As I noted in my previous article on wireframing and prototyping, the earlier you identify problems, the less expensive they are to fix. Running a usability test will, amongst other things, help you to:

  • Identify if users are able to complete specific tasks successfully;
  • Establish how efficiently users can undertake predetermined tasks; and
  • Pinpoint changes to the design that might need to be made to address any shortcomings to improve performance.

In addition to these objective findings (does the product work effectively?), running a usability test helps you to make subjective findings: do users enjoy using the product?

These objective and subjective findings provide valuable feedback that help you to shape and improve your design.

With the benefits of running usability tests clearly established, let’s explore when in the design process you should run your tests. There are a number of points in the design process at which you might run usability testing, and you might be testing an existing product or a competitor’s; this will depend on your project and its circumstances. You might, for example, be:

  • Testing an existing product you’ve created that you plan to redesign;
  • Testing competitors’ products so that you can learn from them if you’re moving into a space where there is already an existing, competing product; or
  • Testing a product you’re currently in the process of working on.

It’s important to allow for more than one series of usability tests. Ideally, you’ll be testing at multiple points in the process: at a midway point with some clickable prototypes; and once your final build is done and you have a fully built product.

Ideally, usability testing should happen throughout the design process. At the prototyping stage, it can help identify issues that would be expensive to fix later in the process. At the final build stage it can equally identify issues that might have been hard to replicate at the prototyping stage.

Each of these points in the process offers you something different to learn and helps you address shortcomings in your assumptions before your final build. The rule of thumb is: The earlier you run a usability test, the better.

Like anything, the better prepared you are, the more effective your usability testing is likely to be, so let’s explore the importance of preparation.

Preparation Is Key

Run effectively, a usability test will take between 30 and 60 minutes per participant. Of course, depending upon the complexity of what you’re building, this length of time will vary, but in my experience, an hour is about the maximum time I’d recommend.

The longer a usability test runs, the more tiring it is for the participant, leading to diminishing returns. As such, preparation is key. It’s important to establish upfront exactly what you hope to learn from the test and, equally importantly, who you’ll be testing. To do this, it helps to:

  • Develop a solid test plan that outlines your usability test, ensuring that when you test across different individuals, you’re doing so in a consistent manner; and
  • Establish clear criteria for recruiting participants, so that you’re testing users who are appropriate to what you’re designing.

The preparation you put in place before the test will pay off in terms of efficiency and improved findings. Bear in mind that running a test will require a number of individuals:

  • The test participants;
  • A facilitator, guiding the test and ensuring everything runs smoothly; and
  • Some observers and note takers.

Time is money, and with so many people involved in the process, it’s important to ensure that the time you’re investing pays off. To remain focused, it’s important to establish a plan for your usability test and prepare a script that ensures everything is consistent.

Establishing A Plan

Your plan serves to establish the following: what you plan to test; how you plan to run the test; what you’ll capture and which metrics you’ll use; the number of users you’ll test; and what scenarios you’ll use as the backbone for the test.

Think about the scenarios that you are trying to test. What is your website or application’s purpose? What’s its primary goal? It’s important to establish a plan around this, including the following elements.

Where And When?

Where and when will you be running the test? Unless you’re working for a large organization, it’s unlikely you’ll have a dedicated space for usability testing. That’s fine; the important thing is you’re running some usability tests!

Try to find a quiet space where you can welcome your test participants and make them feel at ease. Also allow space for a facilitator, who will run the test, and some observers, who will take notes. It helps to group your usability tests so that you can cross-reference your findings across users while everything’s fresh in your mind.

Scope

Establishing the scope of your usability test ensures your goals are realistic. You might be designing a website or a product that is large in scope, but when defining the scope for your usability testing, be realistic. You only have so much time, so focus on the important aspects that you need to address.

Specify what you’ll be testing, for example, your website or application’s navigation system, or its e-commerce flow. This will keep you focused and ensure you don’t drift off-topic.

If you’re dealing with a complex piece of design with multiple moving parts, you might want to run a series of different usability tests, each focused on a particular aspect.

Timings

Different usability tests will require different timings, but as a rule of thumb, it helps to allocate around 30–60 minutes per participant. Moving beyond an hour can result in participants getting tired, which — in my experience — leads to a drop-off in the quality of feedback.

When scheduling your usability tests, ensure you allow sufficient time between them. It’s important to allow for discussion amongst the team immediately following a test, while it’s fresh in everyone’s mind. Equally, a buffer between tests is helpful in case a test runs over or a participant arrives late.
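
To make the scheduling arithmetic concrete, here’s a minimal TypeScript sketch of how you might block out a testing day. The session length, buffer and start time are hypothetical values chosen for illustration, not recommendations from this article.

```typescript
// A minimal scheduling sketch. The 60-minute session, 30-minute buffer and the
// start time below are illustrative values; adjust them to your own plan.
const SESSION_MINUTES = 60;
const BUFFER_MINUTES = 30;

function scheduleSessions(firstStart: Date, participantCount: number): Date[] {
  const starts: Date[] = [];
  for (let i = 0; i < participantCount; i++) {
    // Each slot = session + buffer for team discussion and late arrivals.
    const offsetMinutes = i * (SESSION_MINUTES + BUFFER_MINUTES);
    starts.push(new Date(firstStart.getTime() + offsetMinutes * 60_000));
  }
  return starts;
}

// Example: five participants, first session at 09:00.
scheduleSessions(new Date("2024-05-01T09:00:00"), 5)
  .forEach((start) => console.log(start.toLocaleTimeString()));
```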

Equipment

Again, the equipment you use doesn’t have to be overly complicated. It’s important to be able to capture the session in some form, ideally using video. Above all, it helps to be able to capture what users say and their expressions. As I’ll explore in a moment, you can tell a lot about a design by looking at a test participant’s reactions. Their facial expressions and body language will often tell you as much, if not more, than what they actually say.

We’re fortunate now to have low-cost screen recording software at our disposal. Screen recording tools like Screenflow are very cost-effective and, using your computer’s built-in webcam, enable you to capture not only what the user is doing on screen, but also the look on their face.
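
If it helps to keep your plan consistent across the team, you could capture it in a lightweight, structured form. The sketch below is one possible shape, written in TypeScript purely for illustration; every field name and value is an assumption, so adapt it to whatever your project actually needs to record.

```typescript
// One possible shape for a usability test plan; all field names are illustrative.
interface UsabilityTestPlan {
  scope: string;          // e.g. the checkout flow, not the whole product
  location: string;       // a quiet room is enough; a lab isn't required
  sessionMinutes: number; // roughly 30-60 minutes per participant
  participantCount: number;
  metrics: string[];      // e.g. task completion, time on task
  scenarios: string[];    // the stories your tasks are built around
  equipment: string[];    // e.g. screen recording, webcam
}

// A hypothetical plan for an e-commerce checkout test.
const checkoutPlan: UsabilityTestPlan = {
  scope: "E-commerce checkout flow",
  location: "Meeting room 2",
  sessionMinutes: 45,
  participantCount: 5,
  metrics: ["Task completion rate", "Time on task"],
  scenarios: ["Buy a gift for a friend and ship it to their address"],
  equipment: ["Screen recording software", "Built-in webcam"],
};
```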

With your plan clearly established, it’s time to develop a script.

Creating A Script

Building on your plan, your script will help you to facilitate the usability test clearly and consistently. Creating a script helps you to:

  • Focus your mind on what exactly you’re testing, so that your usability test doesn’t drift and remains focused;
  • Ensure consistency across multiple test participants;
  • Talk about different user scenarios;
  • Clearly articulate the different goals you’re testing; and
  • Help you put your users’ minds at ease.

It helps to break the script down into a couple of sections: a section that acts as a preamble, and a section that covers the main body of the test itself.

Your preamble is designed to settle the user before the test starts. In it, you cover what you’re testing and why you’re testing it. Above all, it’s important to put the user’s mind at rest, assuring them that you are not testing them, you are testing the product.

Participants are human beings, and it’s only natural that they will apologize if and when things go wrong. You need to put their minds at ease and assure them that nothing they do or say is wrong.

Your script is designed to focus your test on the scenarios you have established in your plan. When establishing your scenarios, bear in mind that — depending upon the complexity of the website or product you’re building — you will only have a limited amount of time to test everything, so be realistic.

It helps to establish a story around which you build your series of tasks. For example, if you’re testing a travel-related website or application, consider:

  • How many people are making the journey?
  • When are they traveling and do they have flexibility with their dates?
  • What kind of budget do they have?

Of course, every scenario will be different. The bottom line is to spend some time defining what exactly you plan to test so that your testing is focused on testing the right thing and returns valuable results. If you’ve spent some time building user personas for your project, you might like to build scenarios around these.

In short, try and create as realistic a scenario as possible. As websites and applications become increasingly complex, it helps to test user journeys through your interface. This also helps to tie your testing back to the user stories you identified earlier in the design process, which I explored in my previous article on high-level UX design.
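
To make this concrete, here’s a hypothetical version of the travel scenario above, expressed as a short story plus the tasks built around it. Both the structure and the wording are illustrative only; there is no prescribed format for a scenario.

```typescript
// A hypothetical travel scenario: a short story plus the tasks built around it.
interface Scenario {
  story: string;
  tasks: string[];
}

const familyTrip: Scenario = {
  story:
    "You're booking a week-long trip for two adults and one child in August. " +
    "Your dates are flexible by a couple of days and your budget is around £1,500.",
  tasks: [
    "Find a return flight for all three travellers within the budget",
    "Adjust the dates to see whether flying midweek works out cheaper",
    "Save or share the itinerary so your partner can review it",
  ],
};
```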

Recruiting Participants

With your plan ready and your script written, you need to identify some participants to undertake your usability test. As with your plan and your script, preparation is key; it’s important to put some thought into identifying the right participants.

There’s no point setting aside a considerable amount of time for usability testing and then testing random strangers. Spend some time identifying the right kind of people for your test.

First things first, it’s important to test more than one person. Everyone is different, and everyone draws on different experiences, so make sure you’re testing a variety of people and your results aren’t skewed by too small a sample size. What you’re designing will affect who you choose: different websites and products attract different audiences, so plan accordingly.

Usability.gov has an excellent Usability Test Screener that is a great starting point for building your own screener.

It helps to establish a profile and create a screener to help you identify candidates so that you recruit participants who accurately represent your potential users. It’s important that your participants share the characteristics of your typical customers; again, user personas will be helpful in identifying these characteristics.

Imagine you’re building a mobile application for a new digital challenger bank aimed at a younger demographic. Your screener might include the following questions:

  • What gender do you identify as?
  • What age are you?
  • Which bracket does your income fall into?
  • Are you a saver or a spender?
  • Does your current bank have an app and do you use it?

It’s important to ensure that your questions are inclusive. Equally, exercise some discretion when asking sensitive personal questions. For example, when asking for income — if it is relevant to your test — provide brackets for income so that you respect your applicants’ confidentiality.
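
As a rough illustration of how a screener might be structured, here’s a sketch for the hypothetical challenger-bank app. The questions echo the list above; the brackets, options and acceptance rule are all made-up examples, not a template to copy verbatim.

```typescript
// A sketch of screener questions for the hypothetical challenger-bank app.
// The options, brackets and acceptance rule below are illustrative assumptions.
interface ScreenerQuestion {
  prompt: string;
  options: string[];
  accepts?: (answer: string) => boolean; // filter for the audience you need
}

const screener: ScreenerQuestion[] = [
  {
    prompt: "What age are you?",
    options: ["Under 18", "18-24", "25-34", "35-44", "45+"],
    accepts: (answer) => ["18-24", "25-34"].includes(answer), // younger demographic
  },
  {
    prompt: "Which bracket does your income fall into?",
    options: ["Under £20k", "£20k-£40k", "£40k-£60k", "Over £60k", "Prefer not to say"],
  },
  {
    prompt: "Does your current bank have an app, and do you use it?",
    options: ["Yes, and I use it", "Yes, but I don't use it", "No"],
  },
];
```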

If you’re looking for a good starting point, Usability.gov has an excellent example of a Usability Test Screener for website testing, which will give you some ideas.

With a plan created, your script in hand, and some participants lined up, it’s time to run your test, so let’s explore that now.

Running The Test

Before you begin your usability testing it’s important to get everything organized and in place. It helps to have a record of each usability test you run, so that you can look back through it later and undertake some analysis. This might be through screen-recording software alone, or it might include recording a video of the test.

If you’re recording the test, it’s important to ask your participants for permission. Equally, this gives you an opportunity to explain why you’re recording the session and what you’ll be using the recordings for.

Remember, your goal is to put your participants’ minds at ease, and explaining what’s what helps to do this before you start the test.

Before The Test

Rather than diving straight into your testing scenarios, it helps to run through a short preamble, explaining what exactly it is you’re trying to achieve through your testing. This ensures you give your participant a clear idea of what you’re expecting of them, helps to take the pressure off them, and eases them into the test.

It’s helpful to outline who’s in the room and why, explaining that while you’re running the test, the others present will be recording their observations. It also helps to give the participant an idea of how long the test will last and, broadly, what you’ll be covering.

Before you begin, it’s important to inform the participant that you’re not testing them, you’re testing the software, and that there are no wrong answers. The participant needs to know that your intention is to watch them using what you’ve built and that, to ensure the conditions are as real as possible, you won’t be offering them advice.

This last point is critical, especially if you’re running a usability test on something you’ve designed. It’s important not to interrupt the test participant’s flow by offering them guidance and advice. You won’t be there to do this in the ‘real world,’ so resist the urge to offer advice from the sidelines.

Even if what you’re testing was designed by you, it’s important that you don’t tell your test participant. (A little white lie won’t hurt!) You’re dealing with humans, and if your test participants know that they’re giving you feedback on something you designed, they are likely — only naturally — to hold back on their criticisms. No one likes to hurt someone’s feelings, and it’s important you get an honest opinion, so don’t skew the participants’ answers by telling them they are judging something you designed.

Ask your test participants to try, if possible, to verbalize what they think will happen as they work their way through the scenarios. This helps you to get a feel for what they’re thinking. It also helps your observers and notetakers if you ask your participant to run through the tasks you’ve set them a little more slowly than they might if this wasn’t a test.

As a facilitator — from time to time and being careful not to interrupt the flow — you might like to ask the participant what they think might happen next before they undertake a particular action. This helps you gauge whether your user’s mental model of what’s happening aligns with how the design actually behaves.

Lastly, stress that if something goes wrong during the test, it’s the software’s fault, not the user’s. It’s important that your test participants don’t think something is ‘their fault’ when it’s an issue with your design.

During The Test

With your preamble done, it’s time to get the test underway. As you walk your participants through your script, setting them various tasks, it’s important to resist the urge to lead them. Your goal is to see how they react to the tasks; helping them out with advice defeats that purpose.

This can be incredibly difficult, especially if you’re testing something you’ve designed, but you need to do your absolute best not to help out. It can be frustrating watching someone struggle to understand how to use something you’ve built — that you think makes absolute sense — but remember, what you’re discovering is helping you.

Equally, if you’re responsible for the design try not to let your facial expressions give the game away. This takes practice (especially for me!), but it’s important that you try and remain as neutral as possible.

When running your usability test, you’re learning on two levels, by:

  1. Listening to what people say; and
  2. Observing what people do.

Listening and observing are both important and will provide you with different insights. Listening will give you subjective feedback on your design: “I like this because…,” “I like this kind of feature…,” “I prefer it when….” While subjective, and dependent upon your test participants’ opinions, this kind of feedback is useful, because it can surface ways of doing things that you may not have considered.

Observing how your test participants use your website or application is a great way of seeing what works and what doesn’t. Again, you’re testing your assumptions: you think you’ve designed something that’s perfect, but sometimes your test tells you otherwise.

It’s important to be aware of the distinction between listening to what people say and observing what they do. You are dealing with human beings when you’re running a test and humans like to consider others’ opinions.

You might run into situations where someone you’re testing is complimentary about a particular design or feature (“I like this.”), yet their actions tell a different story (you watch them desperately trying to complete a task you’ve set them!).

Don’t underestimate the power of observation. As Yogi Berra famously put it:

"You can observe a lot just by watching."

This is why, when you run a test it’s important to have more than one person involved. You’ll need a facilitator to take the participant through the usability test, using the script as a guide; and one or more observers to capture the participants’ reactions.

In Closing

Testing — and particularly usability testing — is a critical part of the design process. Run well, an effective usability test will save you money in the long run. Running usability tests, ideally at multiple points in the design process, helps to keep your users positioned front and center, which — as user experience designers — is our goal.

Your tests don’t need to be run in fully fledged laboratory conditions; the important thing is that you’re undertaking testing. If you’re on a shoestring budget, some guerrilla testing is better than no testing; just ensure that you’re testing the right kind of person.

Remember: Who you test is important. There’s no point putting in the hard work to build a detailed usability testing plan if you’re testing the wrong people.

Lastly, as I noted in my article on user research, it’s important to spend some time analyzing your research findings. When you’ve completed a number of usability tests with different participants, sit down with your team, cross-reference everything, and look for patterns of behavior.

Identifying pain points — points in the process where your participants ran into difficulty — means you can fix those pain points. Equally, identifying moments of delight can help you identify what you might want to do more of. Taken together, this feedback — once you’ve applied it — will lead to a better experience all-round.
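
If you’re capturing observations in a structured way, a small script can help with the cross-referencing. The sketch below simply counts how often each pain point was observed across participants to surface recurring patterns; the data shape and the ‘seen by more than one participant’ threshold are assumptions for illustration.

```typescript
// Count how often each pain point was observed across sessions to surface patterns.
// The Observation shape and the threshold below are illustrative assumptions.
interface Observation {
  participant: string;
  issue: string;        // a short label agreed by the note-takers
  isPainPoint: boolean; // false for moments of delight or neutral notes
}

function recurringPainPoints(observations: Observation[]): Map<string, number> {
  const counts = new Map<string, number>();
  for (const o of observations) {
    if (!o.isPainPoint) continue;
    counts.set(o.issue, (counts.get(o.issue) ?? 0) + 1);
  }
  // Keep only issues seen by more than one participant.
  return new Map([...counts].filter(([, count]) => count > 1));
}
```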

Suggested Reading

There are many great publications, offline and online, that will help you on your adventure. I’ve included a few below to start you on your journey.

This article is part of the UX design series sponsored by Adobe. Adobe XD is made for a fast and fluid UX design process, as it lets you go from idea to prototype faster. Design, prototype, and share — all in one app. You can check out more inspiring projects created with Adobe XD on Behance, and also sign up for the Adobe experience design newsletter to stay updated and informed on the latest trends and insights for UX/UI design.
