Experimento: A practical product management framework
Last updated on Oct 1, 2021.
tl;dr: Experimento is a proven product management framework based on experiments. Experiments go through these steps: 👁 observe, 💡ideate, 🗺 roadmap, 🔎 understand the opportunity, 🤔 hypothesize, 🔬 analyze alternatives, ✋ test demand, 📐 design, 🛠 prototype, 👩💻 test usability, ❤️ test value, 📝 spec, ⌨️ develop, 👨💻 validate usability (again), 📣 evangelize, 📈 analyze, ↩️ unship, 💬 share, and 🔄 repeat. Depending on the experiment, steps can be skipped. Here's a Trello template.
As anyone who has had anything to do with Silicon Valley will tell you, the web is a volatile environment. Tech entrepreneurs are far more likely to fall than fly. I’d be the first to admit I’ve been lucky over the years, but I’ve also worked hard and become a firm believer in making your own luck. As technical co-founder of several companies, I’ve been able to create numerous online products. As mentor and investor, I’ve assisted in the creation of numerous others.
“So where’s this going?” I hear you ask.
Well, some time ago, I decided to compile the most important lessons I’d learned about product management into one comprehensive framework. I put it to the test with teams at various companies, including Voice123, Bunny Studio, Torre, and Tribe. After years of tweaks, improvements, and validation, I’d like to make use of this opportunity to share it with you. I call it Experimento.
Principles
Experimento borrows from movements and frameworks such as Agile Software Development, Design Thinking, Inspired, Jobs-to-be-Done, Kanban, Scrum, and the scientific method. Experimento is thus based on the following principles:
- The highest priority for product teams is to enable the creation of value through the development and continuous improvement of their products.
- Product development should follow the scientific method: question, observe, hypothesize, test, analyze, interpret, share, repeat.
- The best way of making sure that an idea works is working hard at killing it with evidence.
- To tackle ambitious ideas, teams must celebrate failure as much as they celebrate success.
- The product manager is responsible for defining the right product. The tech lead is responsible for building the product right.
- Winning products result from a deep understanding of the user’s needs combined with an equally deep understanding of what’s possible at that moment.
- Functionalities should be validated early on in terms of demand, usability, value, and feasibility.
- Testing ideas with real users is probably the single most important activity a tech team could be engaged in.
- Making ideas tangible always facilitates communication.
- Prototypes need to be truly disposable.
- Product teams should welcome changing specs, even late in development.
- At regular intervals, teams should reflect on how to become more effective, then tune and adjust their behavior as well as roadmaps accordingly.
- Comprehensive product documentation increases bus factor, which — in turn — enables continuity.
- Visualization allows a better understanding of work and workflow.
- Product teams should always limit their work-in-progress, thus reducing waste from multitasking and context switching.
- Simplicity — the art of maximizing the amount of work not done — is essential.
- Product teams should be able to maintain a constant cadence indefinitely.
- Companies should build projects around motivated individuals. They should then give these individuals the tools, autonomy, and purpose they need (as well as trust them) to get the job done.
Stakeholders
Experimento is meant to be used by product teams. A typical product team includes a product manager and a team of engineers. Some teams may also have a product designer, UX researcher, product marketer, etc.
Guidelines
An experiment is the basic unit of development in Experimento. Experiments can be small (moving the location of a button, for example) or big (building the core interaction of a platform from scratch). They can also be chores or new experiences. Experiments should have a limited scope so they can be completed in one to eight weeks. While teams can run multiple experiments in parallel, they should limit the number of experiments running simultaneously to a minimum so they can reduce waste from multitasking and context switching.
In project management theory, projects have constraints on three fronts: scope, time, and costs. Two of them can be fixed and one should be variable. In Experimento, scope and costs are fixed. All experiments must have a clear scope. The cost is limited by the budget and size of the team. Experiments shouldn’t be time bound. With this in mind, it’s important to keep the scope of experiments relatively small, so that they don’t take longer than 12 weeks from understanding the opportunity to completion (see steps below). Experiments lasting longer than 12 weeks run the risk of going through team member changes, thus losing the intuition developed and some of the insights required to make experiments successful.
Experiment workflow
Step 0. 👁 Observe
Before a team can come up with good ideas, it must observe the ‘universe’ (its surroundings and operational environment) carefully. Innovators should have an understanding of their customers and their problems via empirical, observational, anecdotal methods or even intuition (source). Observation, then, consists of:
- Human observation. This helps identify problems that people crave to have solved for them. The most painful problems may trigger some of the most powerful ideas. Human observation tends to lead to evolutionary ideas. See this crash course on Design Thinking to find out more about it.
- First-principles thinking. This helps boil things down to the most fundamental truths and then reason up from there. It helps develop our own perspectives on how to solve problems rather than defaulting to the way the rest of the world thinks. As a result, this strategy tends to produce revolutionary ideas. See how Elon Musk uses First Principles.
While ideas may come from product teams through observation, they may also come from other team members, users, and advisors. Customer operations teams (a.k.a. 'customer service', 'customer success', 'account management', etc.) are in a unique position. With the proper mindset and tools, they can systematically compile and analyze user needs and ideas, come up with their own ideas, and feed them to product teams.
Each idea becomes an experiment and goes through multiple steps. At each step, there’s at least one person responsible for the experiment. You keep track of both using a Kanban board.
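To make this concrete, here's a minimal sketch in Python of how a team might model such a board. The column names mirror the workflow below; the `Experiment` class, its `owner` field, and the `advance` method are illustrative assumptions, not part of the framework itself:

```python
from dataclasses import dataclass

# Column order mirrors the Experimento workflow (Observe is step 0 and
# happens outside the board in this sketch).
STEPS = [
    "Ideate", "Roadmap", "Understand the opportunity", "Hypothesize",
    "Analyze alternatives", "Test demand", "Design", "Prototype",
    "Test usability", "Test value", "Spec", "Develop",
    "Validate usability", "Evangelize", "Analyze", "Unship", "Share",
]

@dataclass
class Experiment:
    """A card on the Kanban board: one experiment, one current step, one owner."""
    title: str
    step: str = "Ideate"
    owner: str = ""  # the person responsible at the current step

    def advance(self, owner: str) -> None:
        """Move the card to the next column and hand over responsibility."""
        i = STEPS.index(self.step)
        if i + 1 < len(STEPS):
            self.step = STEPS[i + 1]
            self.owner = owner

card = Experiment("Allow clients to download samples to improve search conversion rates")
card.advance(owner="Dana")  # the product manager decides to pursue the idea
```

In practice a tool like Trello plays the role of `STEPS` and `Experiment`; the point is simply that every card carries its current step and a responsible person.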
1. 💡Ideate
A product team can have hundreds of ideas at any given time. A good way of managing them is to keep a backlog. Ideas represent concrete user functionality, large and small, with which to experiment.
To make it easy for team members to understand the ideas, ideas should be written down as solutions followed by the metric they target. Experiments will usually have a title along the lines of "<solution> to <increase/decrease> <metric>." For example, "Allow clients to download samples to improve search conversion rates." When the product manager decides to pursue an idea, it gets moved to the next step.
2. 🗺 Roadmap
Given that most companies don’t have unlimited time and resources, you have to be smart when prioritizing your efforts. That’s what the product manager does at this stage. Experiments — both chores and new experiences — should be prioritized. As a result of this prioritization, the list of experiments will represent the detailed roadmap of each product. The roadmap should be long enough to properly communicate to other team members where the product is going, yet short enough that all items can be reprioritized frequently. To learn more, check out Prioridad, a framework for prioritizing new product functionalities, product design flaws, and bugs.
3. 🔎 Understand the opportunity
While some ideas come from identifying problems (for example, ordering food or getting a cab), others come from identifying potential innovations (for example, electricity or the iPhone). For the latter, this step is essential. To save time and resources, and to work only on ideas that users are likely to find desirable, you can test preliminary desirability by means of a sort of ‘elevator desirability pitch’, where you pitch the value proposition to potential users, add a few details, and then try to elicit any evidence that relates to the questions below:
- What's the job-to-be-done that this product would help potential users accomplish?
- Do potential users want to use this product?
- Do potential users need to use this product?
- Do potential users seem enthusiastic about this product?
- Would potential users pay for this product?
- How often do potential users envision themselves using this product?
- Is the idea significantly better than current alternatives (competitors or substitutes) that potential users may currently be using?
At the end of this step, you should consider updating the name of the experiment to reflect your findings better. The outcome would be the formulation and validation of the opportunity. If you find enough evidence of value, you continue to the next step. Otherwise, you call the experiment off and continue to the Analyze-step.
The Understand the opportunity-step could be performed by either the product manager or product designer.
4. 🤔 Hypothesize
During this step, a list of hypotheses is created alongside the level at which certain performance indicators should be affected to validate each hypothesis. Of course, an experiment can have any number of hypotheses. This step is important for two reasons: first, it allows you to stay focused. Second, it enables all the members of your team to quickly understand the objectives behind the experiment. For example:
- Hypothesis: Given the chance, clients will download samples.
- Performance indicators affected: A. Conversion rate from search to booking should increase by 10%. B. New performance indicator: Samples downloaded per search session should increase by 10%.
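As an illustrative sketch, the hypothesis above could be recorded as structured data so that the validation targets are unambiguous for everyone on the team. The field names below are made up for illustration; they are not part of the framework:

```python
# One hypothesis from the experiment, paired with the indicator levels
# that would validate it. A target_change of 0.10 means "should increase
# by 10%" relative to the baseline.
hypothesis = {
    "statement": "Given the chance, clients will download samples.",
    "indicators": [
        {"name": "conversion rate from search to booking", "target_change": 0.10},
        {"name": "samples downloaded per search session", "target_change": 0.10},  # new indicator
    ],
}
```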
At the end of this step, you should consider updating the name of the experiment to better reflect the hypotheses.
The Hypothesize-step could be performed by either the product manager or product designer.
5. 🔬 Analyze alternatives
At this step, the product designer learns what other companies and products are doing to enable users to experience similar outcomes. This allows you to create something better than the current state of the art. Using the example above, you would learn how other services enable their users to download samples, share samples, download content, etc.
The Analyze alternatives-step could be performed by either the product manager or product designer.
6. ✋ Test demand
In Inspired, Marty Cagan notes: “One of the biggest possible wastes of time and effort, and the reason for countless failed startups, is when a team designs and builds a product — testing usability, testing reliability, testing performance, and doing everything they think they’re supposed to do — yet, when they finally release the product, they find that people won’t buy it.”
The goal behind demand testing is to quickly collect some worthwhile evidence on demand and a list of users who are ready and willing to talk with you about a specific new capability. (There’s a lot more information on how to conduct demand testing in Cagan’s book). If demand testing fails and you want to retry it, you should go back to the Understand the opportunity-step. If you want to call it off, you continue to the Analyze-step.
The Test demand-step could be performed by either the product manager or product designer.
7. 📐 Design
At this step, the user experience of the functionality is created by the product designer. This includes the tasks, navigation, flow, and interfaces required to make your product both usable (users can figure out how to use it) and desirable (users will want to use it). Ideally, the design should follow a design language. The level of detail of the design depends on the team. When in doubt, product managers should err on the side of clarity vs. explaining the obvious.
8. 🛠 Prototype
A prototype is the representation of the proposed user experience and constitutes the next step. Prototypes allow you to test the usability and value of the design before it is fully developed and released. In all cases, prototypes must be disposable. Depending on the experiment, you should determine the type of prototype and tools required to build the prototype:
- Wireframe or rough sketch. They are usually created by the product designer using tools as simple as paper or PowerPoint or as sophisticated as Figma.
- Static mock-up. They are usually created by the product designer. Standard tools include Photoshop, Sketch, or Figma.
- Interactive mock-up. They could include a high level of detail. They are usually created by the product designer or the UX researcher. Standard tools include Proto.io and Figma.
- Working-code prototype. Usually developed by software engineers, it could be as simple as basic HTML and as complex as a deep neural network. Remember: it should be disposable!
You can learn more about prototypes and how to select the appropriate one here and here.
9. 👩💻 Test usability
One of your goals is to design ways of presenting the functionality of the experiment so that different types of users can figure out how to actually use your product. Usability testing verifies that this is happening. It often uncovers missing product requirements, and also, if done well, identifies product requirements that might not be as necessary as you originally thought.
Usability testing should also keep cognitive load in mind. A good way to test for cognitive load is to test with children, seniors, or people who are slightly inebriated.
Ideally, a usability test should be performed with five testers per device type. For example, five testers using small screens (mobile devices) and five using large screens (desktops). Payment can be negotiated, if necessary. Usability testers don’t have to be real potential users of your products.
Consider making use of services such as userbob.com and usertesting.com for testing with professional user testers. You can even request user testers of senior age (sadly, you can’t request intoxicated people!). These services frequently advertise that their testers aren’t usability experts. While that may be true, such testers do a great deal of testing and get rated for their work, which inevitably leads to their professionalization. Most of them are probably at level 3 of the Technology Proficiency of the OECD. This means they are in the top 5–8% of the population when it comes to using computers.
If possible, in addition to the above, also perform usability testing with other members of your team. Their input can be insightful.
For more information on how to perform usability testing, please again refer to Marty Cagan’s Inspired.
If usability testing fails and you want to retry it, you continue to the Design-step. If you want to call it off, continue to the Analyze-step.
Usability testing is usually performed by the product designer or, if available, a UX researcher.
10. ❤️ Test value
Value testing helps us determine how much users value our proposed solution and desire to use it. There are multiple types of value:
- Functional value: What the product does for the user.
- Monetary value: the product's price relative to its perceived worth to the user.
- Social value: The extent to which using the product enables the user to connect with others.
- Psychological value: The extent to which the product allows the user to express themselves or feel better.
When developing new products or functionalities, this testing can make use of the same prototypes used for usability testing. While in usability testing you’re checking if users can figure out how to do the necessary tasks, in value testing you’re checking if they actually care about those tasks, how well you solve them, and how they feel in the process.
Ideally, a value test should be performed with at least five real potential users. Value testers should not be paid. Paying them, or even providing them with unexpected gifts, will bias their answers for current and/or future tests. Do note that — from my own experience — finding value testers is significantly more difficult than finding usability testers.
Potential users should match your target profile and shouldn’t be friends or family even if they happen to fit the profile. This is because they’re biased, or at the very least, they know too much. In your tests, you’re looking for honest reactions and candid feedback from real-world users — something you’ll never get from someone who knows you.
Value testing is usually performed by the product designer or, if available, a UX researcher. Product managers need to ensure they attend every single value test. Writes Cagan: “Testing your ideas with real users is probably the single most important activity in your job as product manager.” (See Inspired for more on value testing).
If value testing fails and you want to retry it, go back to the Hypothesize-step. If you want to call it off, continue to the Analyze-step.
11. 📝 Spec
The prototype offers a good base for the specs that will be delivered to the engineering team. For complex or mature products, however, that may not be enough. In addition to the prototype, some projects can benefit from documenting use cases, user story maps, and/or lists of user interface interactions. They help engineers properly understand the scope of their work and reduce anxiety.
More importantly, detailed specs enable teams to have a high bus factor, which, in turn, enables further product improvements in the long term. Specs allow you to share project knowledge with future members of your tech teams (learn why this is so important). The level of detail should err on the side of clarity vs. explaining the obvious.
Product designers are usually responsible for writing and updating the specs.
12. ⌨️ Develop
At this step, the engineering team works on the architecture, development, testing and release of the functionality based on the design and specs provided. They also ensure that the performance indicators of the experiment are tracked.
The functionality is released live by the engineering team. Sometimes the functionality released is only accessible to the team. At Torre, for example, my team and I call it an internal release. Sometimes the functionality is only released to a subset of users. We call this a canary release or feature-flagged release. When the functionality is released to all users, we call it a full release. Only then can this step be considered complete. Please note, however, that you can perform this step as well as the following one in parallel.
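A common way to implement a canary or feature-flagged release is to bucket users deterministically by hashing their ID, then raise the rollout percentage over time. A minimal sketch, assuming a hash-based bucketing scheme (the function name and mechanics are illustrative; many teams use a feature-flag service instead):

```python
import hashlib

def in_rollout(user_id: str, feature: str, rollout_pct: int) -> bool:
    """Deterministically place a user in a 0-99 bucket for this feature.

    The same user always lands in the same bucket, so the rollout
    percentage can be raised gradually (canary -> full release) without
    the feature flickering on and off for anyone.
    """
    digest = hashlib.sha256(f"{feature}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) % 100
    return bucket < rollout_pct

# Internal release -> canary release -> full release is just a growing percentage:
show_sample_downloads = in_rollout("user-42", "sample-downloads", rollout_pct=5)
```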
13. 👨💻 Validate usability (again)
This time, usability testing is performed with the actual product instead of the prototype. All issues found should be resolved right away if possible. If not, new experiments should be created for each issue.
14. 📣 Evangelize
Having all members of your team see what you’ve finished validates the team’s hard work, gives the entire company insight into the product, and keeps the evangelism going. At this step, you demo the new functionality to the entire team. Depending on the experiment, this is the time when the marketing team or product marketer announces the functionality to the existing user base.
15. 📈 Analyze
At this step — and after enough time has passed — you analyze the data collected, especially the related performance indicators, and validate or invalidate the hypotheses.
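As an illustrative sketch, the core of this check can be as simple as comparing the observed change in each performance indicator against the target set in the Hypothesize-step. The function name and the numbers below are assumptions for illustration only:

```python
def indicator_hit_target(baseline: float, observed: float, target_change: float) -> bool:
    """True if the indicator moved at least as much as the hypothesis predicted.

    target_change is relative: 0.10 means "should increase by 10%".
    """
    actual_change = (observed - baseline) / baseline
    return actual_change >= target_change

# Conversion rate from search to booking: 4.0% before the release, 4.6% after.
# The hypothesis targeted a 10% relative increase; 0.006 / 0.040 = 15%, so it holds.
validated = indicator_hit_target(baseline=0.040, observed=0.046, target_change=0.10)
```

A real analysis would also account for sample size and statistical significance before declaring a hypothesis validated.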
16. ↩️ Unship
If the hypotheses are proven wrong, the functionality should likely be removed (unshipped). Even when the hypotheses are proven correct, there are other reasons to consider removing the functionality. The article Upsides to Unshipping: The Art of Removing Features and Products from Reforge provides eight additional reasons to unship:
- Strategy: it no longer aligns with the product's strategy and direction.
- Buggy: it underdelivers on value to customers, with too many issues or an unintuitive experience.
- Novelty: it has naturally diminishing returns.
- Niche: it engages only a small subset of users.
- Obsolescence: it's no longer significant.
- Redundancy: it competes with another feature that better solves the customer problem.
- Incompatibility: it works for other products, but not this one.
- Tech costs: it carries exorbitant technical maintenance costs.
17. 💬 Share
Finally, you share the analysis with the rest of the team. This allows you to learn, improve, and let future team members learn from your experience.
18. 🔄 Repeat
The Analyze-step, as well as many other steps, is likely to trigger new ideas — and the cycle starts all over again.
Skipping steps
While it may be tempting to systematically skip steps, it’s usually a bad idea. All steps have a reason to exist. I have learned how important some steps are by making expensive mistakes that have cost my companies hundreds of thousands of dollars.
If you opt to skip a step, ensure you only do it on a case-by-case basis and are aware of the consequences. For example, if you skip usability and value testing, you’re performing genius design. While that may be okay when your experiment is small, it may be unwise for complex or core functionalities, even for an experienced team.
Daily updates
All members of the team working on any given day should answer these questions, either in person or via chat:
- What goals did you accomplish yesterday?
- What goals did you miss?
- What top achievements are you planning for today?
- What top achievements are you planning for the next seven days?
By focusing on what each person accomplished yesterday and will accomplish soon, the team gains an excellent understanding of what work has been done and what work remains. While daily updates can be used to determine who is behind schedule, a more important goal is to get team members to make commitments and then help each other.
If a software developer says, “I will finish the data storage module today,” everyone knows that during tomorrow’s meeting, the developer will say whether or not the task was completed. This has the wonderful effect of helping a team realize the significance of these commitments, and that their commitments are to one another, not some far-off user.
Optional tips for daily updates:
- Daily updates should happen early in the morning to avoid interrupting the team’s focus during the rest of the day. This is one of the reasons why remote product teams should be living and working in time zones that are three hours or less apart.
- Celebrate the small victories: mark experiments as ‘done’ while in this meeting, whether they were successful or not. If they were successful, recognize why. If they weren’t, call attention to what got done and how the lessons learned will aid future improvement. Recognize the effort, if only for a moment. Applaud loudly! Celebrating small victories energizes the team.
- Change meeting moderators: Encourage different team members to take on this responsibility. When you facilitate, you’ll gain a different perspective of the process. You’ll also take ownership and funnily enough, suggest more improvements.
- End the meeting on a high note: If you have a team song, team dance, a war cry — perform it. It’ll always elicit a laugh and keep up the good cheer.
Keeping the team busy
In some cases, there will be specialized team members for given steps — software engineers for the Prototyping and Development steps, for example. Attempting to keep all team members busy with experiments all the time will be an impossible task for many reasons:
- At some steps, experiments may continue for longer than originally estimated. For example, an experiment may take too long to prototype, thus leaving the UX researcher without any experiments to run tests on.
- Some tests will be unsuccessful, thus rerouting the experiment to the Hypothesize-step (or sometimes ending the experiment right away). For example, an experiment may fail value testing, thus leaving the development team without any experiments to develop.
- Team members may go on vacation, instantly halting progress with the experiments until they return.
Fortunately, there are several ways of handling this:
- First, and foremost, you shouldn’t expect all team members to be working on experiments all the time. With the exception of the product manager and product designer, team members could also have other responsibilities when they aren’t working on an experiment. For example, UX researchers could also be responsible for periodic testing, development teams can pay down technical debt, and so on.
- Try to time the start of experiments so that all team members have something to do. Beware, though: limit to a minimum the number of experiments running simultaneously to reduce waste from multitasking and context switching. Furthermore, attempting to keep all team members busy with experiments all the time may mislead you into considering some tests successful when, in fact, they should have been considered unsuccessful. Being busy is not the same as being productive.
Sign-offs
With the goal of maintaining alignment between the product teams and the company leadership, experiments may require explicit sign-offs at different steps.
Getting started
Here’s a Trello template created by Nicolas Contreras V. Feel free to copy it.
Do you feel like creating a template and sharing it? Please let me know in the comments below.
Related frameworks
- Usuario: A practical framework for user research and testing
- Prioridad: A practical framework for product and feature prioritization
- Indicadores: Performance indicators for online platforms (a template)
- Canales: A framework for identifying all client acquisition channels
- And more…
Thanks to Amaury Prieto, Andrés Pachón, Daniel Garcia, Germán Gonzalez, Jun Loayza, and Lucho Molina for sharing knowledge with me that is now at the core of this framework. Thanks to Carel F. Cronje, Andrés Cajiao, and Juan Manuel Garcia for reading and commenting on drafts of this article. Thanks to Cristian Niño for most of the photos for this article. Thanks to Daniela Avila and María Moya for the diagram of the Experimento workflow.