Usuario: A practical framework for user research and testing

Alexander Torrenegra
Aug 30, 2018


Last updated on Oct 1, 2021.

tl;dr: Usuario is a proven user testing and research framework for new and existing tech products. It details the following types of tests: đŸ‘©â€đŸ’»usability testing for new functionalities, ❀ value testing for new functionalities, 🔩exploratory testing, đŸ“œ analysis of user sessions, 🔁 periodic value tests, ↩ manual regression testing, 🎚 NPS surveys, and â™żïž accessibility testing.

Some time ago, I decided to compile the most important lessons I’d learned about user research and testing for new and existing tech products into one comprehensive framework. I put it to the test with teams at various companies, including Voice123, Bunny Inc., and Torre. After years of tweaks, improvements, and validation, I’d like to make use of this opportunity to share it with you. I call it Usuario.

Usuario describes different types of testing and their frequency depending on the stage of a product. Some tests are meant for new functionalities. Some tests are for existing functionalities. All tests complement each other and will help you reduce potential blind spots.

Stakeholders

Depending on the size of a team, one or several people may be responsible for the tests below. For small teams, the product manager should be responsible for all tests. Larger teams may delegate some of the tests to a product designer, and even larger teams may have a dedicated UX researcher.

Important: This framework doesn’t include testing performed by software developers and test automation engineers. Such tests include automated testing, cross-browser, regression, vulnerability, and performance testing, among others.

Jobs to be done

Clayton Christensen, author of The Innovator’s Dilemma and former Harvard professor, made the case that to understand what motivates people to act, you first must understand what it is they need to get done. You need to know the why behind the what.

Christensen says:

When people find themselves needing to get a job done, they essentially hire products to do that job for them 


If a [businessperson] can understand the job, design a product and associated experiences in purchase and use to do that job, and deliver it in a way that reinforces its intended use, then when customers find themselves needing to get that job done they will hire that product.

This theory is known as the “Jobs to Be Done” theory (“JTBD”) because it’s built around a central question: what is the job a person is hiring a product to do? What is the job to be done?

Christensen illustrates the JTBD concept in a 5-minute video about milkshakes.

All the tests described below happen in the context of one or more jobs to be done. What is the job your user is hiring your product to do?

Deliverables

Depending on the test, it may have up to four types of deliverables:

  1. A testing report: A summary of the test performed.
  2. Bugs: Issues with the product not matching its specs.
  3. Chores: Issues with the product even when the product is working according to specs.
  4. New functionality proposals: Potential improvements for the product manager to consider.

Test cycle

Description of tests

A. đŸ‘©â€đŸ’» Usability testing for new functionality

Product teams design ways of presenting a product’s functionality so that different types of users can figure out how to use it. Usability testing verifies that this is actually happening. It often uncovers missing product requirements and, if done well, also identifies requirements that are not as necessary as originally thought.

When developing new products or functionalities, usability testing starts with prototypes. The purpose of the usability prototype is to have something to test on real people. Product managers should plan on multiple iterations before they can come up with a successful user experience. Note that for usability testing purposes, it is perfectly fine if complicated back-end processing is simulated — the key is to evaluate the user experience.

Usability testing should bear cognitive load in mind. A good way of testing for cognitive load is to test with children, seniors, or people who are slightly inebriated.

Right after a new functionality is released, it should be tested again using the actual product. For context, see Experimento: a practical framework for product management.

Ideally, a usability test should be performed with at least ten testers: five using small screens (mobile devices) and five using large screens (desktops). Usability testers can be paid or unpaid. Usability testers don’t necessarily have to be real potential users of your products.

Melissa Gaviria, UX Researcher at Voice123, offering free donuts to attract usability testers!

Consider using services such as userbob.com and usertesting.com for testing with professional user testers. You can even request testers of senior age (sadly, you can’t request people who have had too much to drink!). Note: these platforms usually clarify that their users aren’t usability experts. While that may be true, such testers do lots of testing and get rated on their work, which leads to their professionalization. Most of them are probably at level 3 of the OECD’s technology proficiency scale, meaning they are in the top 5–8% of the population when it comes to using computers.

In addition to the above — if possible — also perform usability testing with other members of your team. Their input can be insightful.

Even if product managers have product designers or UX researchers in their teams, they need to make sure that they analyze every single usability test.

Marty Cagan’s book, Inspired, offers a wealth of worthwhile information on how to conduct usability testing.

B. ❀ Value testing for new functionalities

Value testing helps you determine how much users value your product and desire to use it. Value testing usually happens right after usability testing. Experimento will provide greater context in this regard.

When developing new products or functionalities, this testing can use the same prototypes used for usability testing. While in usability testing you're determining whether users can figure out how to perform the necessary tasks, in value testing you're determining whether they actually care about those tasks and how well you have addressed them.

Ideally, a value test should be undertaken with at least five real potential users. Value testers should not be paid: paying them, even with gifts they didn’t expect, will bias their answers in both your current and future tests. Experience has shown that finding value testers is significantly harder than finding usability testers.

Potential users should match your target profile and shouldn’t be friends or family even if they happen to fit the profile. This is because they’re biased, or at the very least they know too much. In your tests, you're looking for honest reactions from real-world users — something you simply can’t get from someone who knows you.

Even if product managers have product designers or UX researchers in their teams, they need to make sure they attend every single value test. As Marty Cagan points out in Inspired: “Testing your ideas with real users is probably the single most important activity in your job as product manager.”

Please refer to Inspired for more information on how to perform value testing.

Daniela Avila and Ana MarĂ­a DĂ­az from Torre, performing a remote value test (thanks, Moritz, for allowing us to use this image)

C. 🔩 Exploratory testing

After the new functionality has been developed and goes live, you can perform exploratory testing. Exploratory testing is all about discovery, investigation, and learning. It emphasizes your personal freedom and responsibility as an individual. Test cases are not created in advance. Instead, you check your product on the fly. You may make a note of ideas about what to test before test execution. The focus of exploratory testing is more on testing as a “thinking” activity.

Under scripted testing, such as the manual regression testing discussed below, you design test cases first and proceed with test execution later. Conversely, exploratory testing designs and executes tests simultaneously.

Ideally, exploratory testing is performed in all browsers and on all platforms that have significant market share.

To learn how to perform exploratory testing, check out this guide.

D. đŸ“œ Analysis of user sessions

A lot can be learned from watching real users use your products. Tools such as Hotjar and Inspectlet record videos of your visitors as they use your product’s site, letting you watch as if you were looking over their shoulders: every mouse movement, scroll, click, and keypress. These sessions can reveal, among other things, potential usability issues.

If your product site receives many visitors, you should be smart about selecting which sessions to review. A one-click survey that asks users how they experienced using your product can help you identify which users had a negative experience. You can then focus on analyzing those sessions.
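That selection step can be sketched in a few lines. The session data structure and the score threshold below are hypothetical assumptions for illustration; real tools like Hotjar expose recordings and survey responses through their own dashboards:

```python
# Hypothetical session records: a one-click survey score from 1 (very
# negative) to 5 (very positive), or None if the user skipped the survey.
sessions = [
    {"id": "s1", "survey_score": 5},
    {"id": "s2", "survey_score": 1},
    {"id": "s3", "survey_score": 2},
    {"id": "s4", "survey_score": None},
]

NEGATIVE_THRESHOLD = 2  # illustrative cutoff for a "negative experience"

# Keep only the sessions whose survey score signaled a negative experience.
to_review = [
    s for s in sessions
    if s["survey_score"] is not None and s["survey_score"] <= NEGATIVE_THRESHOLD
]

print([s["id"] for s in to_review])  # sessions worth watching first
```

However you store the scores, the point is the same: let the survey triage the recordings so reviewers spend their time on the sessions most likely to surface problems.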

E. 🔁 Periodic value tests

This testing is similar to the value testing described above, but it applies to existing functionality, and it must be performed regularly (usually monthly or quarterly) on the core interactions and main funnels of each product. By doing this, you make sure they remain properly optimized.

Periodic value tests should be performed with users at different stages of your funnel. For example:

  ‱ Recently acquired users
  ‱ Acquired users who got stuck and didn’t activate
  ‱ Recently activated users
  ‱ Activated users who got stuck and were not retained
  ‱ Retained users
  ‱ Retained users who later churned

This is also a great time to learn about your competition. You can ask users questions about substitutes for your product and new market entrants.
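The funnel stages listed above can be sketched as a simple segmentation rule. The field names and the 30-day "recent" window are illustrative assumptions, not part of the framework:

```python
from datetime import date, timedelta

RECENT = timedelta(days=30)  # assumed window for "recently"

def funnel_stage(user, today):
    """Classify a user into one of the six funnel stages.

    user: dict with acquired_on (date), activated_on (date or None),
    retained (bool), churned (bool) - hypothetical field names.
    """
    if user["activated_on"] is None:
        if today - user["acquired_on"] <= RECENT:
            return "recently acquired"
        return "acquired but not activated"
    if not user["retained"]:
        if today - user["activated_on"] <= RECENT:
            return "recently activated"
        return "activated but not retained"
    if user["churned"]:
        return "retained but churned"
    return "retained"

today = date(2021, 10, 1)
user = {"acquired_on": date(2021, 9, 20), "activated_on": None,
        "retained": False, "churned": False}
print(funnel_stage(user, today))  # "recently acquired"
```

Segmenting testers this way before recruiting helps ensure each periodic value test hears from users at every stage, not just the happy path.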

Remote value test being led by Luisa Moscoso, Bunny Inc’s CEO.

F. ↩ Manual regression testing

Regression testing verifies that your product continues to work correctly even after it’s changed (software enhancements, patches, configuration changes, etc.) or the environment around it changes (new browser versions, server updates, etc.).

During regression testing, new software bugs may be discovered. One of the main reasons for regression testing is to determine whether a change in one part of the software affects other parts.

Regression testing can be made efficient by systematically selecting the minimum set of tests needed to adequately cover a particular change.
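One simple way to do that selection is to maintain a map from product areas to test scripts, then run only the scripts covering the areas a change touched. The area and script names below are hypothetical:

```python
# Hypothetical coverage map: which manual test scripts exercise which
# product area.
COVERAGE = {
    "checkout": ["test_cart", "test_payment", "test_receipt"],
    "search":   ["test_search", "test_filters"],
    "profile":  ["test_profile", "test_avatar_upload"],
}

def select_tests(changed_areas):
    """Return the minimal ordered set of scripts covering the change."""
    selected = []
    for area in changed_areas:
        for test in COVERAGE.get(area, []):
            if test not in selected:  # preserve order, avoid duplicates
                selected.append(test)
    return selected

print(select_tests(["checkout", "search"]))
```

The same idea scales up: the richer your map of what covers what, the smaller the suite you need to rerun after each change.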

Regression testing follows a script. Scripts may come in the form of use cases, test cases, and even wireframes.

Ideally, manual regression tests are performed in all browsers and on all platforms that have significant market share.

For more info, see this Wikipedia article.

Note: Regression testing should also be performed by software developers (or their quality control counterparts) as they change the product. The fact that product managers or UX researchers run these tests doesn’t mean that software developers are excluded from running them as well.

G. 🎚 Analysis of NPS surveys

The Net Promoter Score (NPS) is an index ranging from -100 to 100 that measures the willingness of customers to recommend a company’s products or services to others. It is used as a proxy for gauging the customer’s overall satisfaction with a company’s product or service and the customer’s loyalty to the brand. NPS surveys should include an open question for detractors. Responses to NPS surveys should be normalized and analyzed periodically, and responses from detractors should be forwarded immediately (ideally, automatically) to your customer success team so it can contact those users promptly.
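For reference, the standard NPS calculation takes scores on a 0–10 scale, treats 9–10 as promoters, 7–8 as passives, and 0–6 as detractors, and subtracts the detractor percentage from the promoter percentage:

```python
def nps(scores):
    """Standard Net Promoter Score from a list of 0-10 survey scores."""
    promoters = sum(1 for s in scores if s >= 9)   # 9-10
    detractors = sum(1 for s in scores if s <= 6)  # 0-6
    return round(100 * (promoters - detractors) / len(scores))

survey = [10, 9, 8, 7, 6, 10, 3, 9, 5, 10]
print(nps(survey))  # 5 promoters, 3 detractors out of 10 -> 20
```

Passives (7–8) drop out of the numerator but still count in the denominator, which is why a flood of lukewarm responses drags the score toward zero.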

If you are compiling your report to share with others, it should include:

  • The specific user-experience stages where you are triggering the NPS surveys.
  • The channels used to gather the data.
  • % of users at that stage who answered the survey.
  • The NPS and, possibly, weighted NPS (where you associate the weight with the lifetime value of the user).
  • The most popular answers submitted by detractors, normalized, and ranked. These answers can then be used to prioritize your product efforts.

Products with multiple user types will require multiple reports.
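The weighted NPS mentioned in the report checklist could be computed along these lines. The scheme below, where each response counts in proportion to the responding user's lifetime value (LTV), is an illustrative assumption rather than a standard formula:

```python
def weighted_nps(responses):
    """Hypothetical LTV-weighted NPS.

    responses: list of (score_0_to_10, ltv) tuples. Each promoter adds
    its LTV, each detractor subtracts its LTV, passives contribute zero;
    the result is normalized by total LTV.
    """
    total_ltv = sum(ltv for _, ltv in responses)
    signed = 0.0
    for score, ltv in responses:
        if score >= 9:
            signed += ltv   # promoter counts positively
        elif score <= 6:
            signed -= ltv   # detractor counts negatively
    return round(100 * signed / total_ltv)

responses = [(10, 500.0), (6, 100.0), (8, 250.0)]
print(weighted_nps(responses))
```

Weighting this way makes an unhappy high-value customer move the score far more than an unhappy trial user, which is usually the right prioritization signal.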

To learn more about setting and analyzing NPS, check out this article.

H. â™żïž Accessibility testing

Accessibility testing is performed to ensure that the application being tested is usable by people with disabilities, such as impaired hearing or color blindness, as well as by other disadvantaged groups and the elderly. Many people with disabilities rely on assistive technology to operate software products.
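Some accessibility checks can even be automated. As one example, the WCAG 2.x contrast ratio between a foreground and background color is a fixed formula (level AA requires at least 4.5:1 for normal text), and can be sketched as:

```python
def _linearize(channel):
    """Convert an sRGB channel (0-255) to its linear value per WCAG 2.x."""
    c = channel / 255
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def relative_luminance(rgb):
    """WCAG relative luminance of an (r, g, b) color."""
    r, g, b = (_linearize(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    """WCAG contrast ratio (lighter + 0.05) / (darker + 0.05)."""
    l1, l2 = sorted((relative_luminance(fg), relative_luminance(bg)),
                    reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

print(round(contrast_ratio((0, 0, 0), (255, 255, 255)), 1))  # black on white: 21.0
```

Automated checks like this complement, but never replace, testing with real users of assistive technology.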

To learn how to perform accessibility testing, check out this as well as this article.

Related frameworks

  • Experimento: A practical product management framework
  • Prioridad: A practical framework for product and feature prioritization
  • Indicadores: Performance indicators for online platforms (a template)
  • Canales: A framework for identifying all client acquisition channels
  • And more


Thanks to Abe Duarte, Ana María Díaz, Andrés Cajiao, Andrés Pachón, Daniel García, Daniela Avila, Lucho Molina, Melissa Gaviria, Omar Duque, Rodrigo Herrero, for sharing knowledge with me that is now at the core of this framework. Thanks to Carel F. Cronje for reading and commenting on drafts of this article.

Written by Alexander Torrenegra

Focused on making work fulfilling for everyone. CEO/CTO of Torre. Founder of Tribe, Bunny Studio, Voice123, and Emma. Author of Remoter.
