Posted in A/B Split Testing, How To on September 27th, 2010

The statistics of A/B testing results can be confusing unless you know the exact formulas. Earlier, we published articles on the mathematics of A/B testing, and we also have a free A/B testing calculator on the site to check whether your results are significant. But the articles provide only an introduction and the calculator only an interface; the actual formulas used to calculate the statistical significance of split testing results were still missing.

**Excel sheet with A/B testing formulas**

So, we have come up with a FREE spreadsheet which details exactly how the significance is calculated. You just need to provide the number of visitors and conversions for the control and variations. The spreadsheet will automatically calculate significance, p-value, z-value and other relevant metrics for any kind of split testing (including AdWords). Of course, you can see the relevant formulas in the spreadsheet. Click the screenshot below to download the calculator (spreadsheet):

Click here to download the A/B testing significance calculator (Excel sheet)
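For the curious, the core math behind the spreadsheet (a two-proportion z-test) can be sketched in a few lines of Python. This is an illustrative, stdlib-only sketch; the function names are ours, not the spreadsheet's.

```python
from math import sqrt, erf

def normal_cdf(z):
    """Cumulative distribution function of the standard normal."""
    return 0.5 * (1 + erf(z / sqrt(2)))

def ab_test(visitors_a, conversions_a, visitors_b, conversions_b):
    """Two-proportion z-test on conversion rates (illustrative sketch)."""
    p_a = conversions_a / visitors_a            # control conversion rate
    p_b = conversions_b / visitors_b            # variation conversion rate
    # standard error of each rate, from the binomial variance p * (1 - p) / n
    se_a = sqrt(p_a * (1 - p_a) / visitors_a)
    se_b = sqrt(p_b * (1 - p_b) / visitors_b)
    # z-score of the difference, and its cumulative probability
    z = (p_b - p_a) / sqrt(se_a ** 2 + se_b ** 2)
    return p_a, p_b, z, normal_cdf(z)

# example: control 2000 visitors / 134 conversions,
#          variation 3000 visitors / 165 conversions
p_a, p_b, z, p_value = ab_test(2000, 134, 3000, 165)
print(f"z = {z:.2f}, p = {p_value:.3f}")  # z ≈ -1.72, p ≈ 0.043
```

The spreadsheet derives significance, conversion rate ranges and the rest of its metrics from these same quantities.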

Please feel free to share the file with your friends and colleagues, or post it on your blog or Twitter.

PS: By the way, if you want to do quick calculations, we have a version of this calculator hosted on Google Docs (please make a copy of the Google Doc sheet into your own account before you make any changes to it).

**Update:** Thanks to Jai (in the comments below), we noticed a minor error in the conversion rate range calculations (though the significance results were unaffected). The error is fixed in the latest version of the spreadsheet.

### Paras Chopra

CEO and Founder of Wingify by the day, startups, marketing and analytics enthusiast by the afternoon, and a nihilist philosopher/writer by the evening!

## 42 Comments

**Portman** (September 28, 2010)

Aren’t all non-converting visitors a mistrial?

i.e., if I multiply the number of visitors by 10x, but keep the conversions the same, the statistical significance of the results should not change.

See http://blog.asmartbear.com/easy-statistics-for-adwords-ab-testing-and-hamsters.html

**Paras Chopra** (September 28, 2010)

@Portman: No, the number of visitors in the test influences the standard deviation and hence the significance. Suppose you have 10 visitors and 2 conversions vs. 1000 visitors and 200 conversions. You have a much better idea of the conversion rate in the latter case than in the former.
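The effect Paras describes here falls directly out of the binomial standard error formula; this is an illustrative stdlib-only Python sketch, not part of the spreadsheet.

```python
from math import sqrt

def standard_error(visitors, conversions):
    """Standard error of an observed conversion rate (binomial variance)."""
    p = conversions / visitors
    return sqrt(p * (1 - p) / visitors)

# same 20% conversion rate, a hundred times the traffic
print(standard_error(10, 2))      # ≈ 0.1265 — a very loose estimate of 20%
print(standard_error(1000, 200))  # ≈ 0.0126 — ten times tighter
```

The error shrinks with the square root of the visitor count, which is why the extra non-converting visitors are not a "mistrial": they tighten the estimate.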

**links for 2010-09-30 « Köszönjük, Emese!** (September 30, 2010)

[...] A/B testing significance calculator (spreadsheet in Excel) « I love split testing – Visual Websit… You just need to provide number of visitors and conversions for control and variations. The spreadsheet will automatically calculate for you significance, p-value, z-value and other relevant metrics for any kind of split testing (including Adwords). (tags: a/b testing metrics calculator spreadsheet free tools abtesting conversion) [...]

**What is a trust seal actually worth? « The Ecommerce Blog** (October 29, 2010)

[...] a site seal, I strongly recommend performing an A-B test for a few months, or at least until some statistical significance is reached, to see if it will be worth spending the money on the seal again. Also make sure you are [...]

**Benjamin Dageroth** (October 29, 2010)

http://www.cliffsnotes.com/study_guide/Point-Estimates-and-Confidence-Intervals.topicArticleId-25951,articleId-25932.html

When you are using the values 1.65 and 1.96 to calculate significance, aren't those the levels for 90% and 95% respectively? At least, that's what I take from the other website.

**Paras Chopra** (October 29, 2010)

@Benjamin: you will notice that it is +/- 1.65 * SE, so that covers the full 95% of the area of the normal curve.

**Dennis** (November 9, 2010)

Fail. Your conversion rate limits overlap at the 95% level, but you say that they are significant. This is inconsistent.

**Paras Chopra** (November 9, 2010)

@Dennis: not sure if I got your point. Can you elaborate?

**Dennis** (November 10, 2010)

Sure, in your spreadsheet your 95% conversion rate limit for the control is between 5.68% and 7.62% while the conversion rate for the variation is between 4.81% and 6.89%. These two ranges overlap and thus you have failed to find a significant difference as the conversion rate may be 6% for the control and 6% for the variation.

However you have listed in another box that your conversion rate at 95% confidence is significant.

This result contradicts your 95% conversion rate limits results.

**Joe** (November 17, 2010)

Could you please respond to the last comment posted by Dennis? It does seem your worksheet contradicts itself. I would like to use it, but I want to make sure it is accurate.

**Paras Chopra** (November 17, 2010)

@Joe and @Dennis: actually, the 95% range of the conversion rate is different from being significant at the 95% confidence level. If you visualize the conversion rate ranges as normal curves, then the overlap in the 95% ranges constitutes only a tiny area, and that's why the resultant z-value becomes significant at the 95% confidence level.

I hope I am clear. If not, let me know. Will try to clarify.
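The point Paras makes here can be verified numerically. In this stdlib-only Python sketch (with made-up traffic numbers), the two 95% confidence intervals overlap, yet the z-score of the difference still clears the two-tailed 1.96 cutoff:

```python
from math import sqrt

def rate_and_se(visitors, conversions):
    """Observed conversion rate and its standard error."""
    p = conversions / visitors
    return p, sqrt(p * (1 - p) / visitors)

p_a, se_a = rate_and_se(10000, 600)   # control: 6.0%
p_b, se_b = rate_and_se(10000, 680)   # variation: 6.8%

# 95% confidence interval edges for each rate
hi_a = p_a + 1.96 * se_a              # upper edge of control's interval
lo_b = p_b - 1.96 * se_b              # lower edge of variation's interval
print(lo_b < hi_a)                    # True — the intervals overlap

# z-score of the difference between the two rates
z = (p_b - p_a) / sqrt(se_a ** 2 + se_b ** 2)
print(z)                              # ≈ 2.31 — above 1.96, so significant
```

Overlapping individual intervals and a significant difference are not contradictory, because the test works on the distribution of the difference, not on the two intervals separately.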

**Joe** (November 17, 2010)

Isn’t it because you are using 1.65 instead of 1.96? If you are doing a two-tailed test, 1.65 only gives you a 90% range; 1.96 is required for a 95% range, again on a two-tailed view. If you define it as checking whether the variation is better than the control (p_variation - p_control <= 0), then you could maybe use a one-tailed range. But it seems your calculator is just trying to show whether they are different (i.e., you care if either one is larger than the other).

**Paras Chopra** (November 17, 2010)

@Joe: Yes, you are right. It isn’t a one-tailed test. It depends on how you are interpreting the result but I am glad you clarified.

**Bartek** (January 10, 2011)

For the online version of the calculator, you set the minimum N as being 15. Does n=15 have any special relevance?

**Paras Chopra** (January 11, 2011)

@Bartek: which N are you talking about?

**Dave** (February 23, 2011)

Could I use this tool for evaluating responses to a survey?

E.g. 1000 respondents, 600 are satisfied, 400 are not satisfied. Is the difference statistically significant?

**How many hits to your landing page do you need to start A/B testing it? - Quora** (April 2, 2011)

[...] User There's a handy spreadsheet with statistical significance formulas on http://visualwebsiteoptimizer.co…And like Michelle Wyatt said, you can (and should) start testing with any amount of traffic, it may [...]

**Appsumo reveals its A/B testing secret: only 1 out of 8 tests produce results** (May 13, 2011)

[...] Word of caution. Be aware of premature e-finalization. Don’t end tests before data is finalized (aka statistically significant). [...]

**A/B Testing Ad Text for Better PPC Results : Amadeus Consulting** (June 1, 2011)

[...] Paras Chopra from Visual Website Optimizer has some helpful tips on figuring out when you have reached “Statistical confidence.” [...]

**TW** (June 13, 2011)

What’s the best way to measure the statistical significance of revenue improvements? I have my split test feeding data into Analytics, but I’m interested in knowing at what point my Per Visit Value (which may not correlate well with raw conversions) becomes statistically relevant. Is there a way of calculating this? To me, the answer isn’t at what point the number of conversions becomes statistically relevant; it’s at what point the £ or $ becomes relevant.

**Paras Chopra** (June 13, 2011)

@Tim: mathematically, the basis for calculating significance on revenue improvement is similar. You simply need to input the mean and standard deviation of revenue, and the rest of the math remains the same. We already do this for the revenue tracking feature in VWO: http://visualwebsiteoptimizer.com/split-testing-blog/revenue-tracking-for-ab-testing/
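The approach Paras describes can be sketched as follows. The revenue figures and the function below are hypothetical illustrations of the idea, not VWO's actual implementation: the binomial standard error is simply replaced by the observed standard deviation of per-visitor revenue.

```python
from math import sqrt, erf

def z_test_means(mean_a, std_a, n_a, mean_b, std_b, n_b):
    """z-test on two sample means, e.g. average revenue per visitor."""
    # standard error of the difference between the two means
    se = sqrt(std_a ** 2 / n_a + std_b ** 2 / n_b)
    z = (mean_b - mean_a) / se
    p_value = 0.5 * (1 + erf(z / sqrt(2)))  # cumulative normal probability
    return z, p_value

# hypothetical: control averages $2.10/visit, variation $2.45/visit,
# both with roughly $5 standard deviation over 4000 visitors each
z, p_value = z_test_means(2.10, 5.0, 4000, 2.45, 5.2, 4000)
print(z, p_value)  # z ≈ 3.07 — significant even at the 99% level
```

Revenue distributions are typically far more skewed than conversion counts, so larger samples are usually needed before the normal approximation behaves well.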

**TW** (June 13, 2011)

That looks great and would def. give me a reason to use VWO next time. I have a lot of data at the moment in GWO / Analytics for this test that we’ve run so in this particular instance I’ll need to find a way of calculating that significance with the data I’ve got.

**What you have to know about conversion optimization - ConversionXL** (November 4, 2011)

[...] it takes for you to know which one is the best. You need statistical significance. There’s a significance calculator spreadsheet in Excel you can [...]

**What you have to know about conversion optimization | Traindom Blog** (November 6, 2011)

[...] it takes for you to know which one is the best. You need statistical significance. There’s a significance calculator spreadsheet in Excel you can [...]

**Eric** (December 16, 2011)

The significance level of the test is not determined by the p-value, nor is it the probability that the null hypothesis is true.

One rejects the null hypothesis when the p-value is less than the significance level alpha, which is often 0.05

The p-value is based on the assumption that a result is the product of chance alone; it therefore cannot also be used to gauge the probability of that assumption being true.

The significance level of a test is a value that should be decided upon by the person interpreting the data before the data are viewed, and is compared against the p-value or any other statistic calculated after the test has been performed.

The real meaning is that the p-value is the chance of obtaining such results if the null hypothesis is true.

**A/B spilttest giver ikke et brugbart resultat** (December 31, 2011)

[...] A/B testing significance calculator (spreadsheet in Excel) Tweet This entry was posted in Uncategorized. Bookmark the permalink. ← Skal kundens navn med i emnelinjen på dit nyhedsbrev? [...]

**A/B-testing av annonser** (January 12, 2012)

[...] little data to go on – use this simple A/B testing calculator to make sure [...]

**idnA** (January 19, 2012)

Can anybody tell me how to derive the formula for the z-score? I need this formula for my thesis, so it would be good if I could support the correctness of this formula with mathematical literature. Does anybody know books or any other scientific papers that describe this formula?

Thank you very much!

**5 Minutes To A Bigger Email Audience | Intoxicative** (May 3, 2012)

[...] Run your test long enough to get significant results (“2 out of 3” is not exactly scientific proof; calculate significance with this). [...]

**Jai** (July 17, 2012)

Hey Paras. First off, the information you share is awesome, and I love your service.

I’m trying to get my head around all this and I have a few questions:

1) In this spreadsheet, when calculating the 95% Conversion Rate Limits, you multiply your SE by 1.65. However, in your blog entry “What you really need to know about mathematics of A/B split testing” you say that you need to multiply by 1.96 for calculating the 95% range – What am I missing?

2) I’m curious how this multiplier is calculated, if it’s too complex to explain here, how can I learn?

3) In your blog post “What you really need to know about mathematics of A/B split testing” you suggest that you can use a lack of overlap between the conversion rate limits to show one variation is better than the other. In this spreadsheet however you are using the p-value. Does it really matter which is used?

Thanks!

**Wingify** (July 17, 2012)

@Jai. Thanks for your comments:

1) Actually, you spotted an error in the spreadsheet. Thanks for pointing it out here. The labelled 95% conversion rate range is actually the 90% conversion rate range; you are right that 1.96 corresponds to the 95% range (1.65 gives the 90% range). Thanks to you, we noticed this minor error in the conversion rate range calculations (though the significance results were unaffected, as we calculate them directly from the p-value, not the conversion rate range). The error is fixed in the latest version of the spreadsheet.

2) It would be difficult to explain the calculation here, but if you want to learn, there are some excellent resources on the Internet. You should search for z-test or hypothesis testing of binomial variables.

3) The p-value, in a way, measures the overlap between the distributions: the smaller the p-value, the smaller the overlap.

**Jai** (July 18, 2012)

Thanks for getting back to me so fast, and thanks for your comments, things make a lot more sense now :).

**Ay** (July 20, 2012)

Hey, I think there’s also a mistake in the formula for the 90% confidence: it is SI(OU(p_value<0,9; p_value>0,1); “YES”; “NO”) and I think it should be SI(OU(p_value<0,1; p_value>0,9); “YES”; “NO”)

**Wingify** (July 20, 2012)

@Ay: yes, you are correct. We’ve fixed it.

**How to Run A/B Tests That Give Your Business Big Wins** (December 12, 2012)

[...] Website Optimizer has built an Excel sheet that does all the fancy math for you; you can download it here. Just plug in the results from your split test and it’ll tell you if you’ve hit [...]

**Koen van Hees** (September 30, 2013)

Thanks for this fantastic tool!

**Dannie** (October 7, 2013)

How would you use this calculator when looking at a test where revenue, not conversion rate, is the determining factor of success?

Specifically, if I were conducting an email messaging test where the determining factor of a successful treatment is more revenue (not clicks or sales, but overall revenue). For example, perhaps the treatment generates more clicks but a lower conversion rate, yet overall more revenue. Can I still adapt this calculator to calculate significance?

**Sudhakar Kalluri** (November 8, 2013)

Hi Paras,

I looked at the Google Doc version of the spreadsheet, and I think all the [2-sided] conversion rate limits (for example: 5.78% to 7.62% for Control at 90% confidence) are correct, and so are the Z-score and P-value, except that the dimensionless Z-score should be reported as 1.72 (or -1.72 if doing “Variation – Control”) and not as 172.167.

However, the formulas for significance (rows 14-16) are incorrect (assuming you are testing for zero versus non-zero difference in rates from control to variation). For example, for 90% confidence, it should be =IF(OR(p_value<0.05, p_value>0.95),”YES”,”NO”) rather than =IF(OR(p_value<0.1, p_value>0.9),”YES”,”NO”).

Thus for a confidence of (100*c)%, the formula should be =IF(OR(p_value < (1-c)/2, p_value > 1 - ((1-c)/2)),”YES”,”NO”).

With this correction, significances will be row14: YES at 90%, row15: NO at 95%, row16: NO at 99%. Which matches the fact that the Z-score of 1.72 is greater than the 90% cut-off of 1.65, but less than the 95% cut-off of 1.96 & of course less than the 99% cut-off of 2.58.

– Sudhakar

**Sudhakar Kalluri** (November 8, 2013)

Sorry – my earlier comments appear to be truncated.

The correct significance formula at (100*c)%:

YES if p_value < (1-c)/2 or p_value > (1 - ((1-c)/2)).

– Sudhakar
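Sudhakar's corrected two-sided rule is easy to state in code. This is our sketch, assuming the cumulative p-value convention used in the spreadsheet (p-value ≈ 0.9573 for a z-score of 1.72):

```python
def significant(p_value, confidence):
    """Two-sided significance check on a cumulative p-value:
    significant when the p-value falls in either tail of width (1-c)/2."""
    tail = (1 - confidence) / 2
    return p_value < tail or p_value > 1 - tail

p_value = 0.9573                   # cumulative probability for z = 1.72
print(significant(p_value, 0.90))  # True  — 1.72 exceeds the 1.65 cutoff
print(significant(p_value, 0.95))  # False — 1.72 is below 1.96
print(significant(p_value, 0.99))  # False — 1.72 is below 2.58
```

This reproduces the pattern Sudhakar describes: significant at 90%, not at 95% or 99%.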

**Sanjay** (December 20, 2013)

The p-value mentioned here (http://visualwebsiteoptimizer.com/ab-split-significance-calculator/) doesn’t actually seem to be the p-value. It seems to be calculating confidence.

For example, with the following values from the Excel image above in the control (visitors: 2000, conversions: 134) and the variation (visitors: 3000, conversions: 165), the p-value is 0.95.

The same values, when used in the calculator, give a value of 0.043. It therefore needs to be referred to as confidence instead of p-value.

**Paras Chopra** (January 2, 2014)

No, both values are p-values, but in the Excel file cell C18 is =(control_p - variation_p), while in the calculator it is (variation_p - control_p).

So in the Excel file, the p-value for the given values is 0.957435466 ~ 95.7%, and in JS it is 1 - 0.957435466 ~ 0.043 ~ 4.3%.

So both are right, though it seems the order control_p - variation_p is more common.
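A quick stdlib check of this point: flipping the order of subtraction flips the sign of the z-score, and the two cumulative probabilities are complements.

```python
from math import sqrt, erf

def normal_cdf(z):
    """Cumulative distribution function of the standard normal."""
    return 0.5 * (1 + erf(z / sqrt(2)))

z = 1.72              # from control_p - variation_p, as in the Excel file
print(normal_cdf(z))  # ≈ 0.957 — the value the Excel sheet reports
print(normal_cdf(-z)) # ≈ 0.043 — the value the JS calculator reports
```

Either convention carries the same information; only the labelled direction of the comparison differs.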

**Paras Chopra** (January 2, 2014)

@Dannie: you use the same formulae, but plug in the real average and standard deviation that you get from the revenue figures. For conversion rate, we calculate the variance using the formula p*(1-p).
