Usuario: A practical framework for user research and testing
Last updated on Oct 1, 2021.
tl;dr: Usuario is a proven user testing and research framework for new and existing tech products. It details the following types of tests: usability testing for new functionalities, value testing for new functionalities, exploratory testing, analysis of user sessions, periodic value tests, manual regression testing, NPS surveys, and accessibility testing.
Some time ago, I decided to compile the most important lessons I'd learned about user research and testing for new and existing tech products into one comprehensive framework. I put it to the test with teams at various companies, including Voice123, Bunny Inc., and Torre. After years of tweaks, improvements, and validation, I'd like to take this opportunity to share it with you. I call it Usuario.
Usuario describes different types of testing and their frequency depending on the stage of a product. Some tests are meant for new functionality; others are for existing functionality. All of them complement each other and will help you reduce potential blind spots.
Stakeholders
Depending on the size of a team, one or several people may be responsible for the tests below. For small teams, the product manager should be responsible for all tests. Larger teams may delegate some of the tests to a product designer, and even larger teams may have a dedicated UX researcher.
Important: This framework doesn't include testing performed by software developers and test automation engineers. Such tests include automated testing, cross-browser, regression, vulnerability, and performance testing, among others.
Jobs to be done
Clayton Christensen, author of The Innovator's Dilemma and former Harvard professor, made the case that to understand what motivates people to act, you first must understand what it is they need to get done. You need to know the why behind the what.
Christensen says:
When people find themselves needing to get a job done, they essentially hire products to do that job for them …
If a [businessperson] can understand the job, design a product and associated experiences in purchase and use to do that job, and deliver it in a way that reinforces its intended use, then when customers find themselves needing to get that job done they will hire that product.
This theory is known as the "Jobs to Be Done" theory ("JTBD") because it's built around a central question: what is the job a person is hiring a product to do? What is the job to be done?
Christensen illustrates the JTBD concept in a 5-minute video about milkshakes.
All the tests described below happen in the context of one or more jobs to be done. What is the job your user is hiring your product to do?
Deliverables
Depending on the test, there may be up to four types of deliverables:
- A testing report: A summary of the test performed.
- Bugs: Issues where the product doesn't match its specs.
- Chores: Issues with the product even when it is working according to spec.
- New functionality proposals: Potential improvements for the product manager to consider.
Test cycle
Description of tests
A. Usability testing for new functionality
Product teams design ways of presenting the functionality of a product so that different types of users can figure out how to actually use it. Usability testing verifies that this is happening. It often uncovers missing product requirements and, if done well, also identifies product requirements that are not as necessary as originally thought.
When developing new products or functionalities, usability testing starts with prototypes. The purpose of the usability prototype is to have something to test on real people. Product managers should plan on multiple iterations before they can come up with a successful user experience. Note that for usability testing purposes, it is perfectly fine if complicated back-end processing is simulated: the key is to evaluate the user experience.
Usability testing should take cognitive load into account. A good way of testing for cognitive load is to test with children, seniors, or people who are slightly inebriated.
Right after a new functionality is released, it should be tested again using the actual product. For context, see Experimento: a practical framework for product management.
Ideally, a usability test should be performed with at least ten testers: five using small screens (mobile devices) and five using large screens (desktops). Usability testers can be paid or unpaid. Usability testers don't necessarily have to be real potential users of your products.
Consider using services such as userbob.com and usertesting.com for testing with professional user testers. You can even request user testers of senior age (sadly, you can't request people who have had too much to drink!). Note: these platforms usually clarify that their users aren't usability experts. While that may be true, such testers do lots of testing and get rated on their work, which leads to their professionalization. Most of them are probably at level 3 of the OECD's technology proficiency scale, which means they are in the top 5-8% of the population when it comes to using computers.
In addition to the above, if possible, also perform usability testing with other members of your team. Their input can be insightful.
Even if product managers have product designers or UX researchers in their teams, they need to make sure that they personally analyze every single usability test.
Marty Cagan's book, Inspired, offers a wealth of worthwhile information on how to conduct usability testing.
B. Value testing for new functionalities
Value testing helps you determine how much users value your product and desire to use it. Value testing usually happens right after usability testing. Experimento will provide greater context in this regard.
When developing new products or functionalities, this testing can use the same prototypes used for usability testing. While in usability testing you're determining whether users can figure out how to perform the necessary tasks, in value testing you're determining whether they actually care about those tasks and how well you have addressed them.
Ideally, a value test should be undertaken with at least five real potential users. Value testers should not be paid. Paying them, even with gifts they didn't expect, will bias their answers in both your current and future tests. Experience has shown that finding value testers is significantly more difficult than finding usability testers.
Potential users should match your target profile and shouldn't be friends or family, even if they happen to fit the profile. This is because they're biased, or at the very least they know too much. In your tests, you're looking for honest reactions from real-world users, something you simply can't get from someone who knows you.
Even if product managers have product designers or UX researchers in their teams, they need to make sure they attend every single value test. As Marty Cagan points out in Inspired: "Testing your ideas with real users is probably the single most important activity in your job as product manager."
Please refer to Inspired for more information on how to perform value testing.
C. Exploratory testing
After the new functionality has been developed and goes live, you can perform exploratory testing. Exploratory testing is all about discovery, investigation, and learning. It emphasizes the personal freedom and responsibility of the individual tester. Test cases are not created in advance; instead, you check your product on the fly. You may make a note of ideas about what to test before test execution. The focus of exploratory testing is on testing as a "thinking" activity.
Under scripted testing, such as the manual regression testing discussed below, you design test cases first and later proceed with test execution. In exploratory testing, by contrast, test design and test execution happen at the same time.
Ideally, exploratory testing is performed in all browsers and on all platforms that have significant market share.
To learn how to perform exploratory testing, check out this guide.
D. Analysis of user sessions
A lot can be learned from watching real users use your products. There are tools that enable you to watch individual visitors use your product's site as if you were looking over their shoulders. Tools such as Hotjar and Inspectlet record videos of your visitors as they use your products, allowing you to see everything they do: every mouse movement, scroll, click, and keypress. These sessions can reveal a lot, including potential usability issues.
If your product site receives many visitors, you should be smart about selecting which sessions to review. A one-click survey that asks users how they experienced using your product can help you identify which users had a negative experience. You can then focus on analyzing those sessions.
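To make that selection step concrete, here is a minimal sketch. The field names, survey scale, and data are assumptions for illustration; adapt them to whatever your session-recording tool exports.

```python
# Hypothetical example: join session recordings with answers to a one-click
# "How was your experience?" survey and review the negative ones first.
sessions = [
    {"user_id": "u1", "recording_url": "https://example.com/rec/1"},
    {"user_id": "u2", "recording_url": "https://example.com/rec/2"},
    {"user_id": "u3", "recording_url": "https://example.com/rec/3"},
]
survey_scores = {"u1": 5, "u2": 1, "u3": 2}  # 1 (bad) to 5 (great); assumed scale

# Keep only sessions from users who reported a negative experience (<= 2),
# sorted so the worst experiences come first.
to_review = sorted(
    (s for s in sessions if survey_scores.get(s["user_id"], 5) <= 2),
    key=lambda s: survey_scores[s["user_id"]],
)
for session in to_review:
    print(session["recording_url"], "score:", survey_scores[session["user_id"]])
```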
E. Periodic value tests
This testing is similar to the value testing described above, but it applies to existing functionality. It should be performed periodically (usually monthly or quarterly) on the core interactions and main funnels of each product. By doing this, you make sure they are properly optimized.
Periodic value tests should be performed with users at different stages of your funnel. For example:
- Recently acquired users
- Acquired users who got stuck and didn't activate
- Recently activated users
- Activated users who got stuck and were not retained
- Retained users
- Users who were retained but later churned
This is also a great time to learn about your competition. You can ask users questions about substitutes for your product and new market entrants.
F. Manual regression testing
Regression testing verifies that your product continues to work correctly even after it's changed (software enhancements, patches, configuration changes, etc.) or the environment around it changes (new browser versions, server updates, etc.).
During regression testing, new software bugs may be discovered. One of the main reasons for regression testing is to determine whether a change in one part of the software affects other parts.
Regression testing can be performed to test a system efficiently by systematically selecting the appropriate minimum set of tests needed to adequately cover a particular change.
Regression testing follows a script. Scripts may come in the form of use cases, test cases, and even wireframes.
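To make this concrete, here is one hypothetical way a scripted test case could be kept as structured data so that a tester can walk through it and record pass/fail results. The feature, steps, and expected results are made up for illustration.

```python
# Hypothetical example of a manual regression test case kept as structured data.
signup_regression_test = {
    "title": "New user can sign up with email and password",
    "preconditions": ["Browser with a clean profile", "No existing account for the test email"],
    "steps": [
        {"action": "Open the signup page", "expected": "The signup form is displayed"},
        {"action": "Submit a valid email and password", "expected": "A confirmation email is sent"},
        {"action": "Open the link in the confirmation email", "expected": "The user lands on the onboarding screen"},
    ],
}

def print_checklist(test_case: dict) -> None:
    """Print the script so a tester can mark each step pass/fail."""
    print(test_case["title"])
    for precondition in test_case["preconditions"]:
        print(f"  Precondition: {precondition}")
    for i, step in enumerate(test_case["steps"], start=1):
        print(f"  {i}. {step['action']} -> expected: {step['expected']}  [pass/fail]")

print_checklist(signup_regression_test)
```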
Ideally, manual regression tests are performed in all browsers and on all platforms that have significant market share.
For more info, see this Wikipedia article.
Note: Regression testing should also be performed by software developers (or their quality control counterparts) as they change the product. The fact that product managers or UX researchers run these tests doesn't mean that software developers are excluded from running them as well.
G. Analysis of NPS surveys
The Net Promoter Score (NPS) is an index ranging from -100 to 100 that measures the willingness of customers to recommend a company's products or services to others. It is used as a proxy for gauging the customer's overall satisfaction with a company's product or service and the customer's loyalty to the brand. NPS surveys should include an open question for detractors. Responses to NPS surveys should be normalized and analyzed periodically. Responses from detractors should be forwarded immediately (and ideally, automatically) to your customer success team so it can contact the user promptly.
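For reference, here is a minimal sketch of the calculation, including the weighted variant mentioned in the report checklist below. The survey answers and lifetime-value weights are made-up examples.

```python
# NPS from raw 0-10 answers: % promoters (9-10) minus % detractors (0-6).
def nps(scores):
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)

# Weighted NPS: each answer is weighted, e.g. by the user's lifetime value.
def weighted_nps(responses):
    total = sum(weight for _, weight in responses)
    promoters = sum(weight for score, weight in responses if score >= 9)
    detractors = sum(weight for score, weight in responses if score <= 6)
    return 100 * (promoters - detractors) / total

scores = [10, 9, 9, 8, 7, 6, 3, 10]                  # raw survey answers (example data)
responses = [(10, 500), (9, 120), (6, 80), (3, 40)]  # (answer, lifetime value) pairs
print(f"NPS: {nps(scores):.0f}")
print(f"Weighted NPS: {weighted_nps(responses):.0f}")
```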
If you are compiling your report to share with others, it should include:
- The specific user-experience stages where you are triggering the NPS surveys.
- The channels used to gather the data.
- % of users at that stage who answered the survey.
- The NPS and, possibly, weighted NPS (where you associate the weight with the lifetime value of the user).
- The most popular answers submitted by detractors, normalized, and ranked. These answers can then be used to prioritize your product efforts.
Products with multiple user types will require multiple reports.
To learn more about setting and analyzing NPS, check out this article.
H. Accessibility testing
Accessibility testing is performed to ensure that the application being tested is usable by people with disabilities, such as hearing impairments or color blindness, as well as other disadvantaged groups and the elderly. People with disabilities use assistive technologies that help them operate software products.
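Accessibility goes far beyond what any automated check can cover, but to make one piece of it concrete, here is a minimal sketch of verifying that a text and background color pair meets the WCAG 2.1 AA contrast ratio. The color values are made-up examples, and this kind of check is no substitute for testing with real assistive technologies.

```python
# A narrow, automatable slice of accessibility testing: checking that a
# text/background color pair meets the WCAG 2.1 AA contrast requirement.

def relative_luminance(hex_color: str) -> float:
    """Relative luminance of an sRGB color, per the WCAG 2.1 definition."""
    def linearize(channel: float) -> float:
        return channel / 12.92 if channel <= 0.03928 else ((channel + 0.055) / 1.055) ** 2.4
    hex_color = hex_color.lstrip("#")
    r, g, b = (linearize(int(hex_color[i:i + 2], 16) / 255) for i in (0, 2, 4))
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(color_a: str, color_b: str) -> float:
    """Contrast ratio between two colors, from 1:1 (identical) to 21:1 (black on white)."""
    lighter, darker = sorted((relative_luminance(color_a), relative_luminance(color_b)), reverse=True)
    return (lighter + 0.05) / (darker + 0.05)

ratio = contrast_ratio("#767676", "#ffffff")  # gray text on a white background (example)
print(f"Contrast ratio: {ratio:.2f}:1")
print("Passes WCAG AA for normal text (>= 4.5:1):", ratio >= 4.5)
```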
To learn how to perform accessibility testing, check out this as well as this article.
Related frameworks
- Experimento: A practical product management framework
- Prioridad: A practical framework for product and feature prioritization
- Indicadores: Performance indicators for online platforms (a template)
- Canales: A framework for identifying all client acquisition channels
- And more…
Thanks to Abe Duarte, Ana María Díaz, Andrés Cajiao, Andrés Pachón, Daniel García, Daniela Avila, Lucho Molina, Melissa Gaviria, Omar Duque, and Rodrigo Herrero for sharing knowledge with me that is now at the core of this framework. Thanks to Carel F. Cronje for reading and commenting on drafts of this article.