3 Things You May Forget Before Running A/B Tests

“Hey! The optimization tool just showed that the new version of our website resulted in a higher conversion rate than the original version, with 95% confidence! The numbers prove that I was right about coming up with the new design!” The A/B test boss-in-charge at your company declares this with 200% excitement. Of course, after all the time and effort deployed into setting up and running the test, who wouldn’t be excited to see that the stars finally aligned? Or should the boss-in-charge curb her enthusiasm and thoroughly review the setup of the test before jumping to conclusions?


Did You Lay The Foundation Correctly? 

There is a lot more to A/B testing than just “test.” Without laying the proper foundation through thorough preparation, the insights you gain from your test will be limited, or even flat-out wrong. To avoid wasting business resources on invalid tests that lead to bad decisions, make sure to do your homework on the following:

1. Do your research

The first step is to collect quantitative data (e.g., by analyzing metrics in Google Analytics) and/or qualitative data (e.g., by talking to your customers) to identify which design element might be improved upon. Findings from your research will give you a sound understanding of how a particular design element relates to user behavior, help you brainstorm design variations, and identify which stage of the customer’s buying journey the design variation will impact.


2. Identify your goal and objective

What do you want to optimize? Conversion rate? Click-through rate? With a long list of possible metrics to track, you need to identify the critical one that aligns with your goal. For example, if you want to increase the number of blog subscribers, you may set the goal as the conversion rate produced by each variation of the “sign up” button (i.e., # of new blog subscribers / # of visitors). In addition, you may want to collaborate with other team members to determine how much of an improvement in the metric in question is needed to declare the test version the “winner.” For example, is a 10% increase in conversion rate enough for you to go all-in with the test version? Last but not least, never lose sight of your bottom line, especially if your site sells a product or service. For instance, if the test version increases users’ average time on site, but the revenue gained from customers who land on your test page is significantly less than the revenue gained from customers who land on your original control page, the wise choice is to keep the original version.
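To make the math above concrete, here is a minimal sketch in Python of the goal metric: the conversion rate of each version and the relative lift of the test version over the control. The visitor and subscriber counts, and the 10% “winner” threshold, are illustrative assumptions, not data from any real test.

```python
# Conversion rate per version and the relative lift of the test version.
# All counts below are illustrative assumptions, not real data.

control_visitors, control_signups = 10_000, 400   # original version (control)
test_visitors, test_signups = 10_000, 460         # new version (test)

control_rate = control_signups / control_visitors  # e.g., blog sign-ups / visitors
test_rate = test_signups / test_visitors

relative_lift = (test_rate - control_rate) / control_rate

print(f"Control conversion rate: {control_rate:.2%}")   # 4.00%
print(f"Test conversion rate:    {test_rate:.2%}")      # 4.60%
print(f"Relative lift:           {relative_lift:.1%}")  # 15.0%

# The improvement your team agreed on before the test, e.g., "declare a
# winner only if the test version lifts conversion by at least 10%."
MINIMUM_LIFT = 0.10
print("Meets agreed threshold:", relative_lift >= MINIMUM_LIFT)
```

Agreeing on the threshold before the test starts keeps the “winner” decision from drifting once the results are in.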


3. Form your hypothesis 

After doing your research and identifying what you want to optimize, it’s time to form a sound hypothesis that will allow the test to run effectively. Essentially, a hypothesis is “a supposition or proposed explanation made on the basis of limited evidence as a starting point for further investigation.” In this case, your “limited evidence” is the insights you gathered during your quant/qual research, and what you want to “further investigate” is the effect that the variation(s) on your website have on the goal you identified in step 2. Here’s an example of a null and alternative hypothesis pair:

•    The null hypothesis: The difference in conversion rate observed between Version A and Version B is due to random chance. In other words, the null hypothesis states that the outcomes produced by the test version and the control version are essentially the same.

•    The alternative hypothesis: Visitors who land on Version A have a higher conversion rate than visitors who land on Version B.

•    A note on the null and alternative hypotheses: The above examples are very bare-bones. Ideally, you would include what you do differently in the test version as part of the hypothesis, to ensure that the insights you generated from your research can be properly tested. For example, you may expand on “Version A” by stating that “Version A has a call-to-action that uses fear-of-missing-out as a motivator.” During the testing stage, you gather data on the outcomes produced by Versions A and B, which will lead to one of two results: a) you reject the null hypothesis and accept the alternative hypothesis (which prompts you to implement Version A on the site), or b) you fail to reject the null hypothesis (which prompts you to keep Version B and explore other possible variations). As you can see, forming the null and alternative hypotheses is an integral part of the iteration process in idea generation. It also helps to think of the null hypothesis as “innocent until proven guilty”: you only accept the alternative hypothesis when you can be reasonably confident that the probability of seeing the observed difference (i.e., the conversion rate of Version A minus the conversion rate of Version B) by random chance alone is small (a numerical sketch of this test follows this list). Also note that you can never prove or disprove a hypothesis outright based on statistical significance. Because you only test your hypothesis on a sample rather than the entire population (i.e., all prospective website visitors), you can never declare with 100% certainty that A is better than B. Keep in mind that there can be false negatives or false positives due to the many nuances that come with hypothesis testing and statistical significance.

•    The danger of not forming a hypothesis before gathering data or running the test: With so many possible variations to test, you can easily lose sight of what to test. Forming a hypothesis allows you to focus on one assumption at a time, and may help you refine your research process to get better insights. More importantly, without a hypothesis you can be easily swayed by the final test results without a thorough understanding of why one version beat the other, and you will not be able to gain insights that can be applied to other parts of your website and/or product.
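As a companion to the hypothesis discussion above, here is a minimal sketch of one common way to test the null hypothesis: a two-proportion z-test on the conversion rates of Versions A and B. The counts are illustrative assumptions; in practice you would plug in the figures your optimization tool reports.

```python
from math import sqrt
from statistics import NormalDist

# Visitors and conversions for each version (illustrative numbers).
n_a, x_a = 10_000, 460   # Version A (test)
n_b, x_b = 10_000, 400   # Version B (control)

p_a, p_b = x_a / n_a, x_b / n_b
p_pool = (x_a + x_b) / (n_a + n_b)   # pooled conversion rate under the null
se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))

z = (p_a - p_b) / se
# One-sided p-value: the probability of seeing a difference at least this
# large if the null hypothesis (no real difference) were true.
p_value = 1 - NormalDist().cdf(z)

print(f"z = {z:.2f}, p-value = {p_value:.4f}")
if p_value < 0.05:  # the 95% confidence level from the opening anecdote
    print("Reject the null: the lift is unlikely to be random chance alone.")
else:
    print("Fail to reject the null: keep Version B and explore other variations.")
```

Note that even a small p-value here does not prove Version A is better; it only says the observed lift would be surprising if the null were true, which is exactly the “innocent until proven guilty” framing above.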

 

The bottom line? Don’t skip the groundwork, and put your “marketing scientist” hat on when you are running website and/or product optimization tests. Approach testing with the scientific method: a) do your research and brainstorm variations informed by that research, b) identify measurable and practical goals, and c) form testable hypotheses. The initial investment might be higher, but with a solid foundation you will be able to gain valuable insights into user behavior, iterate with confidence, and optimize your website toward a clear, aligned goal, which will ultimately benefit your company’s bottom line.