An important part of A/B testing is the analysis that follows the study. In this video, we will go over how to conduct that analysis.
VWO A/B Testing Calculator
Additional considerations when reviewing the study outcome
- Are you using multiple sources of data?
- Did your design solution not improve the outcome?
- Did you study the user segments?
Further Reading
Real-life examples
- Facebook Feed - Ever notice that your Facebook feed looks different, but when you mention it to your friends, they don't know what you're talking about? That's likely because you're part of an experiment that Facebook is running. Let's imagine you see a new Live Feed design. What do you think is a possible metric that Facebook is evaluating? It could be time on site, the click-through rate on their ads or a friend's post, etc.
- Shopping site checkout flow - Let's say that Amazon wants to know if a slightly darker shade of orange on their Checkout button will increase conversions. Given their size, they might allot just 1% of their traffic to the new color, and after a few days they'll have a large enough sample to know if this change is worthwhile.
An important part of A/B testing is the analysis that follows the study. Let's say that Facebook finds that the experimental group using the new feed design clicked on ads at a 0.5% higher rate than the control group. Does that mean that the new design is performing better? Maybe. The question we should be asking is: what is the probability that the difference between the control and experimental groups arose simply by chance?

To answer this question, you must find the p-value. The p-value is a number that determines the significance of your result. One way to find the p-value is to use one of the many A/B testing calculators available online. I will use the one provided by VWO as an example. You can find a link to this calculator in the notes for this video.
Let's scroll down to get started. You will need to enter the number of people who saw each variation. As an example, let's say that 500 people saw the control, or the original version of a website. Another 500 saw the experiment, or variation. 40 of the people who saw the control ultimately converted, let's say by making a purchase. However, 60 of those who saw the experiment made a purchase. Is that significant? Let's find out. Calculate, scroll down, yes! This is significant.

The p-value here is 0.017. Another way to look at that is by subtracting it from 1: 1 minus 0.017 is 0.983. That means there is a 98.3% chance that the experiment does in fact produce more purchases than the control. Anything higher than 95% is usually considered significant.
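To make the calculator's math concrete, here is a minimal sketch of the same comparison as a one-tailed two-proportion z-test in Python. This is one common way such calculators derive a p-value, and it reproduces the 0.017 result for these counts; whether VWO uses exactly this method is an assumption.

```python
from math import sqrt
from statistics import NormalDist

# Counts from the example above: 500 visitors each, 40 vs. 60 conversions
n_control, conv_control = 500, 40
n_variant, conv_variant = 500, 60

rate_control = conv_control / n_control  # 8% conversion rate
rate_variant = conv_variant / n_variant  # 12% conversion rate

# Pooled rate assumes the null hypothesis: no real difference between groups
pooled = (conv_control + conv_variant) / (n_control + n_variant)
std_err = sqrt(pooled * (1 - pooled) * (1 / n_control + 1 / n_variant))

z = (rate_variant - rate_control) / std_err
p_value = 1 - NormalDist().cdf(z)  # one-tailed test

# Prints: z = 2.11, p-value = 0.017 -- and 1 - 0.017 gives the 98.3% figure
print(f"z = {z:.2f}, p-value = {p_value:.3f}")
```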
In addition to understanding the p-value, here are a few more points to consider when you're reviewing the outcome of your study.

Are you using multiple sources of data? It's easy to make a mistake in how you record data. Having two places to compare that data will make it possible to catch those mistakes. Since Google Analytics is a common tool for web analytics, you may want to use it as one of the sources.
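As a sketch of what that cross-check might look like, the snippet below compares daily conversion counts from your own logs against the numbers reported by an analytics tool, flagging days that disagree by more than 10%. The data, names, and threshold are all hypothetical.

```python
# Hypothetical daily conversion counts from two independent sources
internal_logs = {"2024-05-01": 41, "2024-05-02": 38, "2024-05-03": 45}
analytics_tool = {"2024-05-01": 40, "2024-05-02": 38, "2024-05-03": 52}

# Flag any day where the two sources disagree by more than 10%
for day, internal in internal_logs.items():
    external = analytics_tool[day]
    if abs(internal - external) / max(internal, external) > 0.10:
        print(f"{day}: logs={internal}, analytics={external} -- worth investigating")
```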
Did your design solution not improve the outcome? Well, let's say you added product information for the A/C units, but that didn't change the rate at which consumers left the site without completing a purchase. Maybe that was because they didn't notice the product information they needed, or maybe they didn't understand the language used. If the qualitative research showed that people needed more information but your design did not fix that problem, then test another design idea.
Did you study the user segments? Even when the average difference between the experimental and control groups is small, you can discover significant improvements if you segment your results by user groups. Perhaps desktop users read the new product information with greater ease than mobile users, and so you find that they completed a purchase at a 10% higher rate than mobile users. That would be a big win. There are many other ways to segment your data, depending on what's important to your product: new versus returning users, men versus women, various browser types, you name it.
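Here is a small illustration of that idea, using made-up per-segment counts: splitting the same traffic by device uncovers a large lift among desktop users that the combined average would dilute.

```python
# Hypothetical results, split by device segment (all numbers illustrative)
segments = {
    # segment: (control visitors, control conv., variant visitors, variant conv.)
    "desktop": (250, 18, 250, 38),
    "mobile":  (250, 22, 250, 22),
}

for name, (n_c, x_c, n_v, x_v) in segments.items():
    lift = x_v / n_v - x_c / n_c
    print(f"{name}: control {x_c / n_c:.1%}, variant {x_v / n_v:.1%}, lift {lift:+.1%}")

# desktop: control 7.2%, variant 15.2%, lift +8.0%
# mobile:  control 8.8%, variant 8.8%, lift +0.0%
```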
That's it for our overview of A/B testing. There is much more to learn on this topic, so take a look at the notes for this section to find additional resources.