What our design team learned running remote usability tests
Aline Silveira
March 9, 2020
<p>Design has been integrated into the product teams at our company for a couple of years now. Still, it took us a while to reach a significant milestone in our practice: testing our features with real users and learning from that to build better experiences.</p><p>Since then, usability tests have become a crucial tool in our design process. In this article, we go through the seemingly daunting questions that our team had about the entire UX testing process, and what we've been learning from our recent experiences.</p><h2 id="why-is-testing-interfaces-so-important">Why is testing interfaces so important?</h2><p>Testing a design solution with actual, real-life users can <strong>confirm or dismantle the product team's ideas about the value we are delivering</strong>, and also <strong>show clearly where that value lies</strong>. A well-built test can shift the entire product strategy, saving the team from developing something that people don't care about, or that doesn't help them reach the desired outcome. And this is the kind of discovery that every product team should make sooner rather than later.</p><p>Additionally, running a UX test can <strong>reveal unexpected aspects</strong> of the users' context and the product itself. Every time we ran a usability test, we walked out of it with far more insights than we had expected, sometimes even related to other parts of our products.</p><p>Another significant benefit of testing our UIs is that <strong>it's more cost-effective to reshape the design solution before implementing it</strong> than it is to fix the product when it's up and running. Development costs are the obvious reason, but there's also the cognitive cost of teaching our users a better way to reach an outcome because we didn't get it right the first time.</p><p>After testing a feature, <strong>the team gets on the same page about it</strong>. Sometimes it's hard to defend the value of a particular design decision to our business and engineering counterparts, because the team also needs to consider technical viability and product strategy. But when we all see a test participant struggling with an interaction, it's easier to rally the entire team to solve that problem.</p><h2 id="is-remote-testing-really-better-than-face-to-face">Is remote testing really better than face to face?</h2><p>For the design team at Vinta, a software studio based in South America that caters to clients distributed across North America, testing interfaces remotely was never really a choice. At first, setting everything up and having to rely on various technological moving parts to run our tests seemed like a big challenge. But after our first experience, the benefits of remote testing started to become clear to us:</p><ul><li><strong>Testing remotely is more agile-friendly.</strong> The whole process tends to go faster, as the lead time to schedule participants is reduced dramatically. Our team can run a usability test from scratch (build the prototype, source and schedule participants, run the test, and analyze results) in just two 2-week sprints.</li><li><strong>The lab setup is simpler.</strong> We don't need a one-way mirror room, or multiple cameras filming the participant's face and the screen they're using. Their device, paired with <a href="https://lookback.io/">Lookback</a>, our testing tool of choice, already does the job.
Lookback also allows multiple people from the team to watch and comment on the test as it's happening.</li><li><strong>Participants tend to act more naturally</strong>. A UX test with a prototype always has a component of role-playing, because the users' actions have no real-life consequences, and they might be dealing with static information and placeholder data that wouldn't exist in the real product. In our experience, having users in their regular environment, with their own device, can make the experience feel more natural and relaxed than if we invite them over to an in-person test in a "research lab" setting.</li><li><strong>It's easier to source participants</strong>, not only because the geographic barrier is gone, but also because their time is less compromised. It's way more comfortable for a person to find time on their schedule for a 20-minute call (and spend a few minutes installing an app on their phone) than to physically go to a software studio or a research lab.</li><li><strong>It allows for a more geographically diverse pool of participants</strong>, which can be great if we want to examine cultural differences. For the particular context of web products with an international audience, a test with a diverse participant pool can help the team make sure that no parts of the interface get lost in translation.</li><li><strong>It's easier to follow a protocol.</strong> Seasoned researchers might disagree with me on this. Still, I've always found it easier to distance myself emotionally from participants, follow a script, and act more professionally if I'm doing everything remotely. Having a screen barrier has its downsides (we get limited feedback on our participants' body language), but for our team, the pros outweigh the cons by a landslide.</li></ul><p>There are some <a href="https://articles.uie.com/remote_usability/">great articles</a> that <a href="https://www.sangereby.com/ideas-blog/blog-pages/user-testing-do-i-need-to-be-there/">go deeper</a> into the benefits and challenges of running remote UX tests. If your team has the choice between remote and in-person, and you want to get a broader perspective, I highly recommend further reading.</p><h2 id="when-is-it-worth-to-run-an-ux-test">When is it worth running a UX test?</h2><p>The answer most designers are probably expecting to read here is <strong>always</strong>. But at Vinta, we are die-hard pragmatists, and the fact is that running a UX test requires full dedication from the designer, plus some work and available time from other stakeholders. It's a tool that should be used strategically when we need to figure out key aspects of the product.</p><p>So what we do is analyze the feature we are working on by answering the following questions.
If the answer is <strong>YES to at least two of them</strong>, we know that our design solution will benefit significantly from testing.</p><ul><li>Is this feature crucial for the core business?</li><li>Is this feature a significant technical challenge?</li><li>Are we proposing a dramatic change in flow and functionality?</li><li>Are we proposing something completely new that our users might not be familiar with?</li><li>Are we proposing changes that might pose a high risk to the business?</li></ul><p>After that, we have one final question, which must be answered with full honesty:</p><ul><li>Does the product designer have at least 2 sprints to build and run the test without interruptions?</li></ul><p>If we already know that testing is essential, but the timing is not perfect, the other designers in our team make an effort to shield the one who will be in charge of running the test. We hold the fort for this 2-sprint period, knowing that all of us will learn a lot from that experience.</p>
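<p>For teams that like their heuristics explicit, here's a minimal sketch of this decision rule in TypeScript. It's purely illustrative (we don't actually codify this, and all the names are made up for the example):</p><pre><code>// Illustrative sketch of the "should we run a usability test?" heuristic.
// All names are hypothetical, not from any real codebase.
interface FeatureAssessment {
  crucialForCoreBusiness: boolean;
  significantTechnicalChallenge: boolean;
  dramaticChangeInFlow: boolean;
  completelyNewToUsers: boolean;
  highRiskForBusiness: boolean;
}

function shouldRunUsabilityTest(
  feature: FeatureAssessment,
  designerHasTwoFreeSprints: boolean
): boolean {
  // Count how many of the five questions were answered YES.
  const yesAnswers = Object.values(feature).filter(Boolean).length;
  // At least two YES answers means the design solution will benefit
  // significantly from testing, but only if the designer can dedicate
  // two uninterrupted sprints to it.
  return yesAnswers >= 2 && designerHasTwoFreeSprints;
}
</code></pre>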
<h2 id="what-will-the-test-achieve-for-the-product">What will the test achieve for the product?</h2><p>One big mistake our team made on our very first usability test was to try to validate too many aspects of the experience at once. The client required a significant change in user flow, which would have a considerable technical impact, and that high risk is what ultimately "sold" them on the idea of running a test, to make sure we were on the right track. It was also our first chance to run a remote UX test, and there were lots of things we wanted to find out (both about the product and about how to run usability tests).</p><p>The problem was that without a clear focus on what the team needed to discover or validate, it became tough to narrow down the test scope. Our first test was bloated with too many tasks, a complex prototype that took too long to build, and so many different user journeys that it was nearly impossible to normalize our participants' results.</p><p>We saw these issues halfway into building the test but decided to go through with it, because we needed to have this experience under our belt, and the product owner wouldn't reduce the feature's scope. And even then, we learned a lot about the product and got valuable insights from our users that we wouldn't have been able to catch otherwise. In the end, a poorly-built test is better than no test at all. But the most valuable lessons we got from this first experience were to <strong>start with a solid hypothesis</strong>, <strong>reduce the prototype scope</strong> to what's strictly necessary, and <strong>maintain a laser focus</strong> throughout the experiment.</p><h3 id="how-to-formulate-a-test-hypothesis">How to formulate a test hypothesis</h3><p>The test hypothesis can vary a lot, depending on the feature you're working on. It will inform what kind of test you should do (moderated vs. unmoderated), what profile of participants you should search for (current users vs. prospects), and which tasks should be built into your prototype.</p><p>Hypotheses should relate to the primary purpose of the feature, while questions should address design aspects that you have no other way to validate. Try to work on two opposing hypotheses, keeping your mind open to the fact that the change you're proposing might end up being bad for the product.</p><p>Here's an example: our team was working on a feature to change the first-purchase experience for a healthy meal subscription plan (a B2C product), with the goals of <strong>making it clearer which products were being purchased</strong> and <strong>reducing requests for product replacements after purchase</strong>. It was a big change in a core experience, and represented a high risk for the business. The solution we were exploring involved moving from a <em>"choose between plans 1, 2, and 3"</em> flow into a <em>"build-your-box"</em> experience.</p><p>These were the hypotheses we came up with:</p><ul><li><em>Hypothesis A:</em> changing the signup flow from a choice between three plans into a box that you can fill with products brings users closer to the product, increasing engagement in the purchasing process.</li><li><em>Hypothesis B:</em> changing the signup flow from a choice between three plans into a box that you can fill with products generates decision fatigue and causes users to drop off before completing their purchase.</li></ul><p>And these were the questions:</p><ul><li>Is the UI easy to understand, enabling users to go through the task without a lot of back-and-forth?</li><li>How much time do users spend in the process?</li><li>Are we clearly communicating the different actions/options available?</li><li>Which flow are users most likely to go through?</li><li>Does this part of the UI skew users' decision towards one option over the other?</li></ul><p>Keep in mind that other insights will arise during the test sessions that might not be related to your original questions. Take note of everything (ideas that were not on your radar can prove to be very useful in the future), but remember the purpose of the experiment when you're consolidating the results.</p><h2 id="what-kind-of-test-should-we-run">What kind of test should we run?</h2><p>There are two possible ways to conduct a remote UX test: moderated and unmoderated. As <a href="https://www.nngroup.com/articles/remote-usability-tests/">NN/g defines</a>:</p><blockquote><strong>Moderated</strong> sessions allow for back and forth between the participant and facilitator, because both are online simultaneously. Facilitators can ask questions for clarification or dive into issues through additional questions after tasks are completed.</blockquote><blockquote><strong>Unmoderated</strong> usability sessions are completed alone by the participant. Although there is no real-time interaction with the participant, some tools for remote testing allow predefined follow-up questions to be built into the study, to be shown after each task, or at the end of the session.</blockquote><p>NN/g also goes into detail on how <a href="https://www.nngroup.com/articles/moderated-remote-usability-test/">moderated</a> and <a href="https://www.nngroup.com/articles/unmoderated-user-testing-tools/">unmoderated</a> tests work. I recommend that any designer who is in the process of choosing a test technique read their analysis. We've had the experience of running both kinds of tests at Vinta, and these are the pros and cons that we found most significant:</p><ul><li><strong>Moderated tests</strong> give us more control over the process and the ability to encourage participants to speak their minds while performing a task.
The team feels closer to the users' pain points when we're chatting with them, so this type of test is more likely to effect real change in the product. On the flip side, moderated tests require a lot more time and effort from the designer, and scheduling all the participants in a narrow timeframe can prove to be a challenging logistical puzzle.</li><li><strong>Unmoderated tests</strong> are highly scalable, because once the designer has set them up, they're entirely self-serve on the participants' side. Data from these tests tends to be more objective and easier to go through, and the team can digest results more quickly. However, participants rarely think aloud during this type of test, because there's no one to interact with, only a set of instructions. And if technical difficulties arise, the participant is likely to drop off rather than ask for help.</li></ul><p>Knowing these "built-in features" of each type of test, we look at the hypothesis, how advanced our design solution is, the timeframe that we have to run the test, and the goals of the experiment. Our decision-making process works like this:</p><h3 id="we-run-a-moderated-test-if-">We run a moderated test if:</h3><ul><li>We are exploring a new concept and need to understand how users react to it.</li><li>We want to test a user flow still in its early stages (wireframe or lo-fi prototype).</li><li>We want to impact other stakeholders with opinions and insights from real users.</li></ul><h3 id="we-run-an-unmoderated-test-if-">We run an unmoderated test if:</h3><ul><li>We want to measure efficiency in a repetitive task.</li><li>We want to know if users can understand UI components quickly and use the expected paths.</li><li>We want to sort content in the UI.</li><li>We want to know which part of the interface is most critical to key users, and some numbers might help us make an informed decision.</li></ul><h2 id="how-do-we-start">How do we start?</h2><p>After our team knows which type of test we want to run, there's a lot of work to do: build the prototype, source and schedule participants, run the sessions (in the case of moderated tests), and analyze the results together with the team.</p><p>Sometimes it's hard to know where and how to start such a big assignment, so our team decided to handle UX testing the way we do any of our larger design challenges: by breaking the work into checklists. We've built <a href="https://devchecklists.com/design-moderated-ux-test/"><strong>one for moderated</strong></a> and <a href="https://vinta.atlassian.net/wiki/spaces/design/pages/68584078/Remote+Usability+Test+-+Unmoderated?atlOrigin=eyJpIjoiODhkMzc0OTJlNzcxNDdhNWFmZTYzZjRhMzZhMzNiMGIiLCJwIjoiYyJ9"><strong>one for unmoderated</strong></a> tests. These checklists help us stay grounded and not lose focus on what we're trying to achieve with the usability test.</p><p>I hope that reading about our experience with remote UX testing can motivate other design teams to give it a try. Good luck and happy testing!</p><p>Huge thanks to <a href="https://twitter.com/pjbacelar">Pedro Bacelar</a>, <a href="https://twitter.com/laisvarejao">Lais Varejão</a>, and <a href="https://www.linkedin.com/in/laudlemos/">Laura Lemos</a> for their contributions to this post.</p>