
Why test?

Test outcomes are the proof of the pudding for your hypotheses. You prove or disprove your assumptions by implementing a simulated or minimal version of your idea and observing the actual customer reaction to it. Technically, you’ve been testing from Day 1 with your MVP, but as the volume of your traffic, users, customers, and transactions increases, your testing can produce statistically significant results. What the hell does that mean? When you had only 10 customers, if 2 of them did something different, you didn’t know if this 20% change was worthy of your time and attention, or if those 2 were just odd people. But if you had 1,000 customers and 200 of them exhibited a certain behavior, this 20% change should make you sit up and take notice. As your volume of traffic, customers, or transactions increases, your quantitative test results become more reliable. Testing allows you to learn more about your customers through quantitative studies and determine which changes affect your metrics the most.
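To make that concrete, here’s a quick sketch (assuming Python with the statsmodels library; the numbers are the 2/10 and 200/1000 from above) showing how much tighter the confidence interval gets around the same observed 20% as the sample grows:

```python
# 95% confidence intervals for the same observed 20% rate at two sample sizes.
# Minimal sketch; assumes statsmodels is installed.
from statsmodels.stats.proportion import proportion_confint

for successes, total in [(2, 10), (200, 1000)]:
    low, high = proportion_confint(successes, total, alpha=0.05, method="wilson")
    print(f"{successes}/{total}: 95% CI = [{low:.1%}, {high:.1%}]")

# 2/10     -> roughly [5.7%, 51.0%]: way too wide to act on
# 200/1000 -> roughly [17.6%, 22.6%]: tight enough to take seriously
```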

 

But wait! What am I testing?

In the growth process (link), we saw how we break the OMTM down into its component drivers or levers. We also saw that we brainstorm experiments (used interchangeably with tests here) on what we can do to move those drivers or levers. For example, say you’re trying to improve the conversion rate of a particular page (this is a “lever” that moves the overall conversion rate and hence the OMTM), and you come up with the assumption that the conversion rate is low because of a lack of trust elements on the page. Trust elements can take the form of testimonials, social proof, a free trial, or a money-back guarantee, to suggest a few. The testing would then be testing each of these suggested components (testimonials, free trial, etc.) individually against the control (the original version) as an A/B test, or testing all the variations against the control as a multivariate test (depending on how much traffic you have).
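As an illustration of how visitors get split between versions, here’s a minimal sketch in Python (the experiment and variant names are made up, echoing the trust-element ideas above) of deterministic bucketing, so each visitor always sees the same variation:

```python
# Deterministic A/B/n bucketing: hash a stable user ID into a variant so the
# same visitor always sees the same version. Names here are illustrative.
import hashlib

VARIANTS = ["control", "testimonials", "social_proof", "free_trial", "money_back"]

def assign_variant(user_id: str, experiment: str = "pricing_page_trust") -> str:
    """Map a user deterministically and roughly uniformly onto a variant."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return VARIANTS[int(digest, 16) % len(VARIANTS)]

print(assign_variant("user-42"))  # same input -> same variant, every time
```

For a plain A/B test you’d keep only the control and one challenger in the list; testing all five at once is what demands the extra traffic discussed below.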

 

How long to run a test?

As long as it takes to get statistically significant results. That depends on three things:

 

  1. The percent improvement you are aiming for

  2. Your starting point (the baseline conversion rate)

  3. How much traffic you have

 

It takes far less traffic for the testing tool to conclude that Variant A is 30% different from Variant B, and a lot more traffic has to run through the experiment before the tool can claim with confidence (statistical significance) that Variant A is 5% different from Variant B (the same concept as the 2/10 vs. 200/1000 in the paragraph above). Now imagine you have variants A, B, C, D, and E (multivariate testing): each variant needs that same large volume of traffic before the tool can conclude with confidence that a 1% change has occurred. This is why Google can afford to test something as small as several shades of blue; Google has the traffic to conclude that test confidently in a few hours. For a startup with 10k visitors a day, reaching a confident conclusion about a 1% change might take several months.
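You can estimate this traffic requirement up front. Here’s a hedged sketch (assuming Python with statsmodels, a 10% baseline conversion rate, 5% significance, and 80% power; all numbers are illustrative) of how the required sample per variant explodes as the lift you’re hunting shrinks:

```python
# Visitors needed per variant to detect a relative lift over a 10% baseline.
# Assumes a two-sided test at alpha=0.05 with 80% power; numbers illustrative.
from statsmodels.stats.proportion import proportion_effectsize
from statsmodels.stats.power import NormalIndPower

baseline = 0.10
power_calc = NormalIndPower()

for lift in (0.30, 0.05, 0.01):  # 30%, 5%, 1% relative improvement
    effect = proportion_effectsize(baseline * (1 + lift), baseline)
    n = power_calc.solve_power(effect_size=effect, alpha=0.05, power=0.8)
    print(f"{lift:.0%} lift: ~{n:,.0f} visitors per variant")

# Roughly ~900 visitors for a 30% lift, ~28,000 for 5%, ~740,000 for 1%.
```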

 

The second factor is your starting point. If your starting point is, say, a 90% conversion rate (a page that’s already converting well), it takes less time to detect a 1% improvement than if your starting point is a 10% conversion rate.
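Extending the same sketch, you can compare starting points directly; with the same assumptions as above, the already-well-converting page confirms a 1% relative improvement with a fraction of the traffic:

```python
# Same power calculation, now varying the baseline for a fixed 1% relative lift.
from statsmodels.stats.proportion import proportion_effectsize
from statsmodels.stats.power import NormalIndPower

power_calc = NormalIndPower()

for baseline in (0.10, 0.90):
    effect = proportion_effectsize(baseline * 1.01, baseline)
    n = power_calc.solve_power(effect_size=effect, alpha=0.05, power=0.8)
    print(f"baseline {baseline:.0%}: ~{n:,.0f} visitors per variant")

# The 90% baseline needs roughly 90x fewer visitors than the 10% baseline.
```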

 

Dramatic tests on higher-converting pages (usually lower in the funnel) are the quickest tests you can run at a startup. Startups can’t afford slow tests.

 

Learn to Prioritize

Hopefully you brainstormed the hell out of how to test each hypothesis and have a ton of different ideas to test. THE most important thing you can learn as a founder is how to FOCUS & PRIORITIZE. We already saw that the OMTM sets the focus for every member of the team, and you prioritize ALL your efforts & experiments based on the ICE score.

ICE = Impact, Confidence, Ease

Impact is a 1-10 score of the impact (including scalability) a particular effort or experiment will have on your OMTM if the experiment is successful.

Confidence is a 1-10 score of how confident you are about the impact number you came up with above. If it’s a total guess, score it low; if it’s something you’ve done before, score it high. Over time, as you learn more about your customers, team, product, channels, etc., your gut (and therefore your confidence scores) gets more accurate.

Ease is a 1-10 score of how easy the experiment is to run: the less effort required, the higher the score. Effort is time (implementation time for your team + the testing time discussed above).

Add up I + C + E and prioritize the efforts that score the highest.
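In practice this is a three-column spreadsheet, but here’s a minimal sketch in Python (experiment names and scores invented for illustration) of scoring and ranking a backlog by ICE:

```python
# Rank a backlog of experiments by ICE score (Impact + Confidence + Ease).
# The experiment names and scores below are made up for illustration.
experiments = [
    {"name": "Add testimonials to pricing page", "impact": 7, "confidence": 6, "ease": 8},
    {"name": "Free trial instead of demo",       "impact": 9, "confidence": 4, "ease": 3},
    {"name": "Money-back guarantee badge",       "impact": 5, "confidence": 7, "ease": 9},
]

for exp in experiments:
    exp["ice"] = exp["impact"] + exp["confidence"] + exp["ease"]

# Highest ICE score first: that's what the team tackles next.
for exp in sorted(experiments, key=lambda e: e["ice"], reverse=True):
    print(f'{exp["ice"]:>2}  {exp["name"]}')
```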

 

Real World Examples:

List of 10 A/B tests by Design For Founders

How experimentation helped growthhackers.com with growth


 

Common Mistakes Founders Make:

  • Picking the wrong tests. Re-read the section above on why you should run dramatic tests.

  • Optimizing one part of the funnel independent of the others. Always track the impact of optimizing one part of the funnel on the bottom-of-funnel metric (hopefully this is also your OMTM), so you’re not optimizing superficially. For example, baby or kitten pictures can raise the CTR on your FB ads through the roof, but if they’re completely irrelevant to your product, none of that traffic will convert to customers (see the sketch after this list).

  • Not testing enough / not testing often. Even if you have nothing to test, just run an A/A test; you’ll learn something about your audience, your testing tools, and more.

  • Analysis Paralysis. Although data can be pivotal in making sound decisions, spending too much time analyzing metrics without acting on them can be harmful. Don’t be a deer in the headlights. Test often, execute quickly, and iterate even faster.
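To see the second mistake in numbers, here’s the promised sketch (all rates invented) of why you judge a test by end-to-end conversion rather than the single stage you touched:

```python
# Judge a test by bottom-of-funnel conversion, not just the stage you optimized.
# The kitten-ad numbers below are invented to illustrate the trap.
def end_to_end(ctr: float, signup_rate: float, purchase_rate: float) -> float:
    """Fraction of ad impressions that become paying customers."""
    return ctr * signup_rate * purchase_rate

control = end_to_end(ctr=0.02, signup_rate=0.10, purchase_rate=0.20)
kittens = end_to_end(ctr=0.08, signup_rate=0.02, purchase_rate=0.10)  # 4x the CTR...

print(f"control: {control:.3%} of impressions become customers")  # 0.040%
print(f"kittens: {kittens:.3%} of impressions become customers")  # 0.016%, worse
```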

Related Keywords:

A/B Testing, Multivariate Testing, Statistical Significance, KPIs, Metrics, Measurements, Customer Acquisition, Back-to-Back Testing, Aggregation, Clickstream Analytics

Related Links:

Always be Testing by Hiten Shah, Founder of Kissmetrics (video)

13 Ways You're Screwing Up Your A/B Tests by Peep Laja, Founder of ConversionXL

A/B Testing by Ron Schneidermann, CMO at AllTrails (video)

Measuring for Impact: Knowing When, What & How to A/B Test by Mike Greenfield, Co-Founder & CEO at Laserlike

Testing Features, Debunking "Best Practices," & Rethinking the Org Chart by Aihui Ong, Founder & CEO at Love With Food

Mind Over Matter: Tactics for Testing Assumptions & Increasing Conversion by Tiffany daSilva, Head of CRO at Shopify

How to grow your business using experimentation by Hiten Shah, Founder of Kissmetrics

First Round Capital interview with Andy Johns, Wealthfront
