Growth Newsletter #317
Over the past few weeks, we've been running a series on growth experimentation. How teams get it wrong, why impact comes from identifying leverage (and not "big swings"), and how to identify your leverage.
The response has been great, and a lot of you asked for more. So we pulled a few pages from our Growth Program, and we're back with a bonus edition: the tactical deep dive.
This week's tactics
Growth experimentation: getting tactical
Insights from Demand Curve's Growth Program.
Experimentation isn't just about A/B testing or conversion rate optimization. As a growth function, experimentation is far bigger than that: it's how teams generate learnings.
And why does that matter?
Well, all things equal, the companies that learn fastest typically grow fastest. They're the ones with the most efficient acquisition channels, the deepest understanding of their customers, and the most sought-after products.
Bing changed the way it displayed ad headlines and generated $100 million annually in the US alone. Airbnb tested making listing clicks open in a new browser tab instead of navigating away from search results, producing a 3-4% booking lift and outperforming a machine learning recommendation model that took three months to build.
But here's what most founders get wrong: they confuse speed with impact.
"High-tempo testing" and "move fast and break things" sound great on LinkedIn. In reality, more deliberate experiments yield the most illuminating results.
The 5 principles that matter
1. The growth process never ends
Everything changes. Industries come and go. Customer behavior shifts. New channels emerge while legacy ones become saturated. And if you're a Growth Program member, you know that growth is largely a function of not just maintaining, but strengthening, the alignment of your "Foundational Five": the fit among your market, product, model, channel, and brand. To do that, you must constantly generate new learnings. Experimentation is how that happens.
2. Learning > "wins"
Around 90% of experiments (read: A/B tests) result in so-called failure, meaning the test didn't beat the control. But that's the wrong mindset. A conversion rate lift is nice, but only if it teaches us something. Our "losing" tests can be packed with valuable learnings, yet we watch teams toss them aside in search of the next "winner."
Here's how to redefine failure and success:
- Failure: An experiment that didn't produce a learning.
- Success: One that did.
A sloppy test is a fail. An inconsequential test is a fail. But one that produces a learning and gets us one step closer to discovering where our real leverage lies... that's a major win.
3. Prioritize ruthlessly.
Large businesses can test everything. You can't. Resources, time, and traffic are limited. You need minimum viable tests (MVTs): tests designed to be as efficient as possible without sacrificing integrity. Simple tests with clear results and learnings.
Think impact and efficiency.
4. Speed and size are not the goals. Leverage is.
"Big" changes don't guarantee big results. It's often the smallest changes that have the biggest effects. The Obama campaign in 2007-2008 tested minor variations to its splash page. Those simple changes resulted in 40.6% more signups.
We went deep on this in newsletter #311. If you missed it, might be worth a read.
5. Focus on the fundamentals. Velocity and scale will follow.
Over time, you can scale up your experimentation program. Netflix and Uber didn't start out running thousands of experiments; they grew to that point. Getting there requires a strong experimentation infrastructure, a culture that promotes curiosity, and a cross-company commitment to experimentation.
The 6-step process that works
Step 1: Identify problems and opportunities
Your north star metric is a lagging indicator. It's the output you're ultimately trying to move. But you can't experiment on an output directly. "Increase total bookings" doesn't tell you what to actually do.
So you break it down into its inputs. The levers that influence it. These are the things you can actually test and improve: signup conversion rate, activation rate, time-to-first-booking, day-7 retention. If you improve the right levers, the north star follows.
Look at your data. Where are you below your targets? Where do you see drop-offs? Focus on the levers with the most influence on your north star. That's where experiments will have the highest impact.
Create problem statements as succinctly as possible:
- "We're not getting enough signups through our homepage"
- "Our first-week churn is too high"
- "Our sales cycle is taking too long"
If you read Parts 1-3 of this series, you'll recognize this as Problem Discovery, and you know it deserves far more than the quick recap we're providing here. If your team is moving quickly and there's plenty of activity but the results aren't there, you very likely have a problem discovery... well, problem. Check out the rest of the series if you missed it.
Step 2: Hypothesize
Now that you know what the problems are, why do they exist? Use evidence to create informed hypotheses.
The best evidence comes from three sources:
Customers: Practice empathy. Use customer surveys, user interviews, usability tests, heatmaps, and session recordings. Your customers are why you're experimenting.
Team: Practice humility. Use your data, teammates' knowledge, and past experiments, with past experiments weighted most heavily.
Market: Practice diligence. Study what growth tactics other companies have implemented. If they have strong experimentation programs, what you see are likely outcomes of disciplined testing. If they serve the same market and job-to-be-done (even if they go about it in a different way), their learnings may be highly applicable to you.
A good hypothesis is testable, has clear cause and effect, is measurable, and includes both a belief statement and prediction:
"We believe customers aren't re-ordering because they have to do it manually and forget. If we add a 'subscribe and save' feature, repeat orders will increase."
Step 3: Evaluate & prioritize
Not everything needs testing. Separate test candidates from Nike candidates (things you can just do):
- Anything broken (fix bugs, don't test them)
- Anything that's low-effort, low-impact, and low-reach
- Anything (truly) highly urgent
For everything else, we recommend a modified version of the RICE framework to prioritize:
- Reach: How many people experience what you're testing?
- Impact: How much could this move the needle?
- Cost: How long will the test take, and will it cost money?
- Evidence: How strong is your evidence, and how much do you have?
For reach and impact, think broad strokes. Don't get caught up trying to be precise. Cost and evidence, on the other hand, are worth a bit more scrutiny.
Side note: You might see other variations of RICE out there. First, we use Cost instead of Effort. Effort makes people think narrowly about labor hours, but the real cost includes monetary spend, opportunity cost, and how long the experiment takes to run. Second, we use Evidence instead of Confidence. Confidence invites subjectivity; Evidence forces you to actually have something to point to.
For most teams, keep it simple: score each factor from 1-10, add them up, and sort by total score. Yes, there are more sophisticated frameworks. At some point, we'll write about the ones we use with our later-stage clients. But they're overkill for most.
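To make the scoring concrete, here's a minimal sketch in Python. The experiment names and scores are invented, and inverting Cost (so cheaper tests rank higher) is our own tweak on the plain "add them up" rule:

```python
# Illustrative RICE-style scoring. Experiment names and scores are made up.
experiments = [
    # (name, reach, impact, cost (1 = cheap, 10 = expensive), evidence)
    ("Subscribe-and-save feature", 7, 8, 6, 7),
    ("Homepage headline rewrite", 9, 3, 2, 5),
    ("Onboarding checklist", 5, 7, 4, 9),
]

def rice_score(reach, impact, cost, evidence):
    # Each factor is scored 1-10. Cost is inverted (11 - cost) so that
    # a low-cost test adds to the total instead of inflating it.
    return reach + impact + (11 - cost) + evidence

# Sort by total score, highest first.
ranked = sorted(experiments, key=lambda e: rice_score(*e[1:]), reverse=True)

for name, *factors in ranked:
    print(f"{rice_score(*factors):>2}  {name}")
```

A spreadsheet does the same job; the point is just that the arithmetic stays dead simple so the debate happens over the scores, not the formula.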
Step 4: Design minimum viable tests (MVTs)
MVTs are the most efficient tests you can run to get the learnings you need. Before jumping to "let's build an MVP and A/B test it," ask:
- Whatâs our risk if this hypothesis is wrong?
- How much certainty do we need from our test?
- What does the ideal experiment look like, and how expensive is it?
For high-risk hypotheses, take a step-wise approach: start with cheap experiments like customer interviews, validate, then move to slightly more expensive tests. For lower-risk changes, you might run usability sessions instead of a full A/B test and implement immediately based on clear feedback.
Common MVT types include:
- A/B tests (highest certainty, most resource-intensive)
- Customer research (surveys, interviews, usability tests)
- Landing pages with waitlists
- Prototypes paired with customer research
- Ads plus landing pages to gauge interest
Additional tip: For A/B tests specifically, use an online calculator to determine sample size. The key input is your minimum detectable effect (MDE): the smallest change that would make the experiment worthwhile. Don't try to guess your actual impact; just determine what would be meaningful enough to pursue.
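If you'd rather see the math those calculators run, here's a rough sketch using the standard normal-approximation formula for comparing two proportions. The 4% baseline and 1-point MDE are made-up inputs:

```python
# Rough sample-size estimate for a conversion-rate A/B test, using the
# standard normal-approximation formula for two proportions.
import math
from statistics import NormalDist

def sample_size_per_variant(baseline, mde, alpha=0.05, power=0.8):
    """Visitors needed in EACH variant to detect an absolute lift of `mde`."""
    p1, p2 = baseline, baseline + mde
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided significance
    z_beta = NormalDist().inv_cdf(power)           # desired statistical power
    p_bar = (p1 + p2) / 2                          # pooled conversion rate
    numerator = (
        z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
        + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))
    ) ** 2
    return math.ceil(numerator / (p2 - p1) ** 2)

# Hypothetical inputs: 4% baseline signup rate; a 1-point absolute lift
# (to 5%) is the smallest change worth acting on.
print(sample_size_per_variant(0.04, 0.01))
```

Notice the punchline: detecting a 1-point lift on a 4% baseline takes thousands of visitors per variant. That's exactly why low-traffic teams should lean on the cheaper MVT types above instead of defaulting to A/B tests.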
Step 5: Launch
Build your test, make sure everything is tracked properly, then launch.
Should you peek at results early? Technically no; it risks false positives/negatives. But do it anyway. Looking at results early ensures you catch anything critically broken. Just do it intelligently:
- Check 1 day after launch purely to identify bugs
- For tests under a week, don't peek beyond that initial check
- For longer tests, peek at the halfway mark, but only call it if you're seeing extreme results. If you're not sure what "extreme results" means, don't end your tests early.
When your test is ready, document the results. Leave interpretation for the next step.
Step 6: Learn & apply
Learn from your results by asking:
- Why do we think it worked or didn't work?
- What else do we need to know to get closer to the "why?"
- How far off were we from predicted impact?
- Were other metrics affected in ways that counter the success or failure?
- What does this tell us about our users' preferences and behavior? Are we sure we're drawing the right conclusion?
- Was our north star metric affected?
Segment your findings. Looking at specific segments might reveal something you wouldn't see from aggregate data. A homepage test might show the same overall results, but segmenting by traffic source could reveal social traffic converted 10% worse while organic search converted 20% better.
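Here's a toy sketch of that segmentation, with fabricated numbers chosen so the aggregate looks flat while the segments diverge:

```python
# Segment-level readout of a fictional homepage A/B test. Aggregated,
# control and variant convert identically; by traffic source, they don't.
from collections import defaultdict

# Rows: (traffic_source, variant, visitors, conversions). All numbers invented.
rows = [
    ("social",  "control", 2000, 200),
    ("social",  "variant", 2000, 180),  # 9% vs 10%: converts 10% worse
    ("organic", "control", 1000, 100),
    ("organic", "variant", 1000, 120),  # 12% vs 10%: converts 20% better
]

overall = defaultdict(lambda: [0, 0])  # variant -> [visitors, conversions]
for source, variant, visitors, conversions in rows:
    overall[variant][0] += visitors
    overall[variant][1] += conversions

# Aggregate view: both arms land at exactly 10%, so the test looks like a wash.
for variant, (v, c) in sorted(overall.items()):
    print(f"overall {variant}: {c / v:.1%}")

# Segment view: the "flat" result hides two opposite effects.
for source, variant, visitors, conversions in rows:
    print(f"{source:8} {variant}: {conversions / visitors:.1%}")
```

The same cut works for device, plan tier, or new-vs-returning users; just beware that slicing many segments invites false positives, so treat segment findings as new hypotheses, not conclusions.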
Then apply what you've learned:
- Implement changes based on positive results
- Revise hypotheses
- Design new experiments to uncover deeper learnings
- Improve your processes and infra as gaps and weaknesses are exposed
- Revise strategies and tactics based on highest-conviction learnings
Finally, share your findings. Especially at a startup, everyone should have access to your experimentation program's tests and findings. Use centralized documentation, regular Slack updates, or team meetings. Transparency and shared knowledge compound as you grow.
Wrapping up
Experimentation gives you reliable insights that, when applied strategically, accelerate growth and the formation of moats. The inverse is also true: if you don't run experiments, you likely won't grow as quickly. You won't produce learnings, and you might lose an edge to competitors.
If you're just getting started, remember: slow down in the short term to move faster long term. Keep it simple. Focus on quality over quantity.
And most importantly: the goal is to produce learnings, not "wins." The companies and teams that learn the most, all things held equal, will eventually win.
- Justin Setzer
Demand Curve Co-Founder & CEO
