How To Run User Tests At A Conference

User testing is hard. In the world of agile software development, there’s a constant pressure to iterate, iterate, iterate. It’s difficult enough to find time to design, let alone get regular feedback from real users.

For many of us, the idea of doing formal user testing is a formidable challenge. There are many reasons why: you don’t have enough lead time; you can’t find enough participants, or the right type of participant; you can’t convince your boss to spend the money.

In spite of this, user testing is the best way to improve your designs. If you rely on anecdotal data or your own experience, you can’t design a great solution to your users’ problems. User testing is vital. But how do you make the case for it and actually do it?

What Is User Testing?

Let me start by defining what user testing is, and what it is not.

User Testing *Is*…

  • Formal. Your goal is to get qualitative feedback on a single design iteration from multiple participants. By keeping the sessions identical (or as similar to one another as possible), you’ll be able to suss out the commonalities between them.
  • Observational. Users don’t know what they need. Asking them what they want is rarely a winning strategy. Instead, you’re better off being a silent observer. Give them an interactive design and watch them perform real tasks with it.
  • Experimental. At the core of any user study is a small set of three to five design hypotheses. The goal of your study is to validate or invalidate those hypotheses. The next iteration of the design will change accordingly.

User Testing Is *Not*…

  • Ad-hoc. Don’t accept what a single person says at face value. Until you get signal from several people that a design is flawed, withhold judgment. Once five or six participants have given consistent feedback, change the design.
  • Interrogative. Interviews are useful for learning about users, their roles, and their experiences. But keep it brief. Interviews tend to put the focus on what people say they do, not what they actually do.
  • Quantitative. Because the sample size is small, you can’t make strong statistical extrapolations based on numbers alone. If you care about numbers, look into surveys, telemetry, and self-guided usability tests instead.

What Is A User Study?

A user study is a research project. It starts with a small set of design questions. You take those questions, reformulate them as hypotheses, devise a plan for validating the hypotheses, and conduct five or six user tests. Once done, you summarize the results and decide on next steps. If the findings were clear, you might make improvements to the design. If the findings were unclear, you might conduct an additional study.

You won’t get it right the first time. Test your design, iterate, and repeat. (Image credit)

A Good User Study Has Clear And Measurable Outcomes

If you have clear expectations, it will be much easier to take action on what you learn. This is often accomplished with hypotheses: testable statements you assume to be true for the purposes of validation. Examples of good hypotheses include:

  • Users can add an item to their shopping cart and check out within five minutes.
  • Users want to click on server-related error messages to see additional details.
  • Users are not frustrated by the lack of a dashboard in the product.
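Hypotheses like these are easiest to act on if you track them in a structured way. Here’s a minimal sketch of one possible approach in Python; the statement text, the five-session minimum, and the 80% threshold are illustrative assumptions, not a prescribed method:

    from dataclasses import dataclass, field

    @dataclass
    class Hypothesis:
        """A testable statement, marked supported or not after each session."""
        statement: str
        results: list = field(default_factory=list)  # True if a session supported it

        def verdict(self, min_sessions: int = 5) -> str:
            # Withhold judgment until enough participants have weighed in.
            if len(self.results) < min_sessions:
                return "insufficient data"
            support = sum(self.results) / len(self.results)
            return "validated" if support >= 0.8 else "invalidated"

    h = Hypothesis("Users can add an item to their cart and check out within five minutes.")
    for outcome in (True, True, False, True, True):  # one entry per participant
        h.results.append(outcome)
    print(h.verdict())  # -> validated

The point is not the code itself; it’s that each hypothesis gets an explicit pass/fail outcome per session, which makes the summary at the end of the study mechanical.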

A Good User Study Is Easy To Facilitate

This is especially important if you are not the facilitator. If the facilitator is inexperienced with user testing, you’ll need to provide a test script that is easy to understand, keeps the test on track, and explains what you are trying to learn from the test.

A Good User Study Must Be Sufficiently Detailed And Interactive

If you want to measure a user’s reaction to an on-screen animation, you probably need a coded prototype. If you need to decide whether a particular screen can be omitted from the final design, a set of PSD mockups will do. Needless to say, there are a lot of moving pieces here. Effective user studies are rigorous, and rather expensive to pull off as a result. If you cut corners, you may second-guess your results and need to run another study to be sure.

Self-Evaluation

That’s what user testing is. Now, ask yourself the following questions:

  • Do you conduct user tests?
  • Are they a regular part of your practice?
  • Would you like to do more of them?
  • What’s keeping you from doing more of them?

I ask these questions often. It’s amazing how few of us do user testing with any consistency, myself included. Everyone wishes they did more of it. That’s both a problem and an opportunity.

User Testing In An Agile World

The agile mantra is “fail fast, fail early”. The faster you fail, the faster you’ll converge on the right solution. This equates to a lot of tight iterations. Agile teams traditionally have two-week sprints, with the goal of releasing a running (read: testable) build at the end of each sprint.

Great, right? The problem is that this leaves very little time to validate a design, summarize the results, and do just-in-time design for the next iteration. Recruiting can take a week in itself, to say nothing of the testing.

And that’s not tenable. At most, you’ll have a few days to get some actionable insights before the next iteration starts. How might we solve this problem?

Let’s make a few assumptions:

  • Five iterations from the start to the end of the design process.
  • Five participants in each user test (25 participants per design across all iterations).
  • Four designs in flight simultaneously (five iterations each, 100 participants in total).
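A quick back-of-the-envelope calculation makes the participant math concrete (plain Python, using the numbers assumed above):

    iterations_per_design = 5
    participants_per_iteration = 5
    designs_in_flight = 4

    per_design = iterations_per_design * participants_per_iteration  # 25 participants
    total = per_design * designs_in_flight                           # 100 participants
    print(per_design, total)  # 25 100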

One way to solve the problem of getting out in front is to validate multiple iterations before any software is built. Not every design needs a live-code prototype to validate it. Sometimes, a clickable Balsamiq PDF is enough. Now, we’ve shifted the problem. The number of design iterations (and the number of test participants) is the same as before, but you can get a lot further before engineering starts building anything. You just need a lot of participants, fast.

User Testing At Conferences

Unless you’re lucky enough to design a product that millions of people use, recruiting can be a challenge. Since I design software for system administrators, the best place to get qualitative feedback in a matter of days is at an IT conference.

The basic steps are:

  1. Pick a conference
  2. Write some studies
  3. Set up your booth
  4. Analyze the results (in real time)
  5. Iterate on the design
  6. Rinse and repeat

Obviously, you’ll need help, so bring some volunteers with you. Also, don’t expect to nail this the first time you try it. Give yourself a chance to make mistakes and learn from them.

Conferences: the best place to conduct a lot of user tests in a very short amount of time. (Image credit)

The number of times you can iterate depends on what you’re learning. If you’re learning a lot, keep going. If you’re running into tool limitations, it might be time to stop and have your development team build you a live-code prototype.

Bonus: if you have software development skills, you might be able to build a prototype yourself. Better yet, bring some developers with you.

"Disclaimer: I've done conference-based user testing twice, and haven't entirely nailed these steps (even though we've made great strides in the right direction). It might take a few tries to get it right."

Attempt #1: PuppetConf 2012

Once a year, Puppet Labs hosts PuppetConf, a tech conference for IT professionals. In 2012, it was held at the Mission Bay Conference Center in San Francisco, and 750 people attended.

Two of us prepared five studies and set up three user testing stations in a high-traffic hallway. Each user testing station consisted of a laptop, a stack of test scripts and NDAs, and a volunteer to help facilitate the tests. We had about 16 volunteers, and ran 50 user tests.

Mission Bay Conference Center at UCSF, the site of our 2012 user testing. (Image credit)

This was a great experience, but we didn’t get much actionable research out of it. Our focus was on data gathering. We didn’t bother to analyze that data until weeks after the conference, which meant it had gathered dust. In addition, the things we tested weren’t on our product roadmap, so the research wasn’t timely anyway.

Attempt #2: PuppetConf 2013

In 2013, we repeated our user testing experiment. That year, the conference was held at the Fairmont San Francisco hotel, and 1,200 people attended.

Five of us prepared six studies and set up three user testing stations in a room adjacent to a high-traffic hallway. We added dedicated lapel mics and three-ring binders to keep our scripts organized. With the same number of volunteers (16), we ran almost twice as many user tests (95).

This year was vastly more successful than the previous year. We pulled analysis into the event itself, so we got actionable data more quickly than before.

Fairmont San Francisco, the site of our 2013 user testing. (Image credit)

Unfortunately, we didn’t go the extra step of iterating on what we learned during the conference. Our product wasn’t affected until months later. It was a step in the right direction, but too slow to be considered agile.

What Did We Learn?

In 2012, we made a large number of mistakes, but we learned from those mistakes, improved our tests and testing process, and doubled both the quality and quantity of the tests in 2013. So, don’t be afraid of failing. A poor user testing experience will only help you learn and improve for next time.

Here are some of my observations from those experiences.

Conferences Let You Cut The Fat Out Of Recruiting

Recruiting is very time-consuming. We have a full-time position on our research team at Puppet for that very purpose. But at conferences, people are already present and willing to engage with you. All you need to do is show up.

In a typical user study, we send out a screener email to 50–100 people in our testing pool. A lot of people won’t respond, and of those who do, only some will meet the requirements for the test. It takes time to get enough valid responses, and sometimes we have to widen the net, which takes more time.

Conferences Let You Validate Your Entire Roadmap

In both years we had more interest in testing than we could facilitate. In 2013, the 95 participants who tested with us were far more than we needed.

If you decide to conduct self-guided, quantitative usability tests, you can run even more tests. In 2014, our research team had over 200 people take a single usability test.

Conferences Are Chaotic, But Process Can Help

In 2012, we had a simple four-stage process: greet, recruit, test, and swag.

  1. Greet. Every time someone came to our booth, a greeter volunteer said hi and told them what we were doing.
  2. Recruit. Next, we asked if they wanted to join our Puppet Test Pilot pool for testing opportunities throughout the year. If so, we scanned their badge.
  3. Test. If a test station was available, we asked if they wanted to take a 15–20 minute user test. If so, the greeter introduced them to a facilitator at one of the stations.
  4. Swag. At the end of the testing, we thanked each participant and gave them a limited-edition T-shirt and a signed copy of Pro Puppet.

This process worked well, but there were a couple of obvious holes. First, we didn’t have a good screening process, so there was no guarantee that a participant was a good match for the tests. Second, we didn’t have a plan to quickly learn from the tests and act accordingly (see: agile).

To correct these shortcomings, we added two steps to our 2013 testing process, making it: greet, recruit, screen, test, swag, and analyze.

  • Screen. At the beginning of the testing process, the facilitator asked the participant six questions, one for each user test. If the answer to a question was yes, we knew they’d be a good match for that test.
  • Analyze. At the end of the testing process, the facilitator filled out a short form. Each user test was allocated a text field, with the study hypotheses alongside. The facilitator entered their notes, and marked the validity of each hypothesis.
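As an illustration of how the screen step routes people to tests, here’s a hypothetical sketch; the study names and question texts are invented (in practice, this was simply a facilitator asking questions from a form):

    # One yes/no screener question per study; a "yes" qualifies the
    # participant for that test. All names below are made up.
    SCREENERS = {
        "install-flow": "Do you install and configure the product yourself?",
        "error-details": "Do you troubleshoot server errors day to day?",
        "dashboard": "Do you regularly monitor infrastructure health?",
    }

    def eligible_tests(answers: dict) -> list:
        """Return the studies a participant qualifies for."""
        return [study for study, question in SCREENERS.items() if answers.get(study)]

    print(eligible_tests({"install-flow": True, "dashboard": False}))
    # -> ['install-flow']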

Conferences Allow Your Competitors To Snoop

We used NDAs to counteract this. As an unintended side-effect, they made the testing seem more exclusive and special, so participants were eager to sign them.

In 2013, we switched from paper to digital forms, via DocuSign. From a logistical standpoint, this was a great move. We didn’t have to keep track of loose stacks of paper after the conference. On the other hand, the signing workflow was rather cumbersome. People had to sign their initials three times and click multiple times to complete the NDA.

Conferences Are A Great Way To Build User Empathy

Ultimately, user testing is about people, not testing. Both years, we recruited volunteers from non-UX departments within the company: engineering, product, marketing, and sales. It was great to give these people an opportunity to engage with our users over real problems.

And it goes both ways. People love to talk about their job, their pain points, and how your product or service falls short of easing that pain. No, anecdotal data isn’t terribly useful in a design context, but it can help you build a mental model of real-world problems.

Conferences + User Testing Is A Scary Combination

As I mentioned, we recruited volunteers from non-UX teams. Many of those volunteers had never conducted user tests before. It was a nerve-wracking experience for many of them.

In 2013, we introduced a training process to get our volunteers up to speed more quickly. To do this, we held a series of training meetings.

In the first meeting, we got everyone in the same room and talked through the testing process and the tests at a high level. Next, we broke up into small groups of two or three people apiece. In these groups, we had volunteers practice facilitating the tests with each other. The test author attended these as well, to spot areas in need of improvement or clarification.

If our volunteers were still nervous about the prospect of user testing, we met with them personally. In some cases, we convinced them to push forward and run user tests anyway. In other cases, we moved them to a less demanding role, usually the role of a greeter.

Conferences Are A Black Hole For Data

In the first year, one of our three test laptops was mysteriously wiped of data. The second year, two of our laptops were stolen. We lost all of the test recordings on those machines.

The silver lining was the post-test analysis we did in 2013. Because our facilitators took such rigorous notes, and saved those notes to the cloud, we retained the data, even though the actual recordings were lost.

Process Is King, But Organization Is Queen

Keeping things digital as much as possible helps. If you must use paper, don’t use manila folders. Instead, use three-ring binders with labels to keep your papers collated.

On the digital side of things, consider having a single folder where all conference-related documents and data live. Use tools like Dropbox or Box to keep everything synchronized across machines. Having local copies is critical, in case the network goes down, which it probably will.
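If you want to prepare that single folder ahead of time, a tiny script can set up the skeleton; the folder names here are just an example layout, not a standard:

    from pathlib import Path

    # Skeleton for a synced conference folder (e.g. inside Dropbox or Box).
    root = Path("conference-testing")
    for sub in ("scripts", "ndas", "recordings", "facilitator-notes", "analysis"):
        (root / sub).mkdir(parents=True, exist_ok=True)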

Use Retrospectives To Learn And Improve

After the conference, hold a meeting with the core testing team. For the first five or ten minutes, write ideas on sticky notes. These ideas should take the form of things to stop doing, keep doing, or try doing. Put these stickies on a whiteboard, under the appropriate column (keep, stop, or try).

Once everyone runs out of ideas, pick a volunteer. This person groups the stickies by theme (e.g. “communication”, “process”, “people”). Ideally, everything boils down to three to five groups. For each group, find an actionable way to improve that area, then assign each action item to a member of the group. It becomes that person’s responsibility to own it.
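To keep the retro output actionable, the end state can be as simple as a theme-to-owner mapping. A hypothetical sketch, with themes, actions, and owners invented for illustration:

    # Each theme boils down to one concrete action with one owner.
    action_items = [
        {"theme": "communication", "action": "freeze test scripts 48 hours before the event", "owner": "Alice"},
        {"theme": "process", "action": "add a dry run on the conference hardware", "owner": "Bob"},
        {"theme": "people", "action": "pair new facilitators with veterans", "owner": "Carol"},
    ]
    for item in action_items:
        print(f"{item['owner']} owns ({item['theme']}): {item['action']}")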

Should You Add Conferences To Your Toolbox?

Having done this a couple of times, I can say there are clear pros and cons. No user testing tool or technique is a cure-all, and conference-based testing is no exception.

Pros

  • Lots of participants. Hundreds at a small conference, thousands at a medium conference, tens of thousands at a large conference. Take your pick.
  • Easy recruiting. Build it and they will come. It helps if you point your laptops into the room, and have the designs clearly displayed on their screens.
  • Enables rapid iteration. You can easily complete five or six tests in an hour or two. Faster if you have multiple test stations.

Cons

  • Chaotic testing environment. You know those quiet usability testing rooms with the mirrored glass? You won’t find those at a conference.
  • Travel required. Unless you’re lucky enough to have a relevant conference in your city, you’ll probably need to fly somewhere. This can be expensive.
  • Difficult timing. Remember those roadmaps I mentioned earlier? If the design phase doesn’t line up with a conference, find a different way to get the research you need.

In general, this approach works well when you have a predictable product roadmap. If you know what you’re going to be building, and when, you can time the design phase to coincide with one or more conferences.

On the other hand, if you need the flexibility to run tests at a moment’s notice, this approach won’t work well. In that situation, I recommend having a dedicated room for testing at your company, containing all the equipment you’ll need.

Tips To Make This Work For You

If you’ve read this far and think conference-based user testing is right for you, great! Here are some tips to help you succeed.

  • Pick a conference five months in advance. You don’t have to know exactly what you’ll be testing, but it’s a good idea to have a target date and venue in mind, so you can start thinking about it.
  • Pick a conference with people who don’t know you exist. Because we ran testing at our own conferences, everybody knew about us. This self-selection bias prevented us from getting a good cross-sample of our potential market.
  • Don’t pick a booth in the busiest hallway. As tempting as it might be to get maximum visibility, ask yourself if the additional chaos is worth it. In 2013, we picked a booth in a room separated by a half wall from a busy hallway. As a result, we had good visibility without being in the middle of the chaos.
  • Don’t write every study yourself. The first year, I wrote four of the five user studies. As a result, they were difficult to facilitate and didn’t result in actionable data. It’s time-consuming to write a good user test that validates your hypotheses and is easy to facilitate.
  • Don’t schedule people in advance. When your testing stations fill up, it’s tempting to start a waiting list. Don’t do that. You’ll become beholden to the list and have to turn people away, even when there appear to be empty test stations. Be serendipitous about it.
  • Practice running each test on each machine before the conference. Murphy’s law. Need I say more?
  • Go forth and user test. The only thing worse than a poor user testing experience is not doing it at all. If you fail, at least you’ll learn how to do it better next time. If you don’t do anything, you’ve learned nothing.

And that’s it. If you have any questions, please get in touch through Twitter or leave a note in the comments below. Thank you for reading.

Resources

When I first proposed conference-based user testing to my team, I was an intern straight out of school. If I could pull this off, so can you. If you’re still intimidated, start small. You can grow your efforts, but you have to start somewhere.

Here are some of the resources we used in testing:

Tools

Articles And Examples


Since doing this, I’ve learned of others who have done user testing at events. Here’s a list of articles with slightly different takes on the process:

Further Reading

Front page image credits: Rosenfeld Media.
