The first method of quantitative testing that we’ll be discussing is A/B testing. This method compares two or more versions of a website or app to find which one performs better on a specific conversion metric.
How to know when to run an A/B test and how to get started running one
- Notice something is off in your analytics data
- Find out why
- Propose a potential solution
New terms:
- Control group -- People who see the original website design. This is used as the baseline for your experiment.
- Experimental group -- People who see the design being tested.
- Multivariate testing -- Technique for testing multiple concepts at the same time.
- Conversion rate -- Rate at which visitors to your site get to a particular goal that you have chosen (e.g., making a purchase, signing up for a service, or subscribing to a newsletter)
[MUSIC]

Usability testing is a great form of qualitative research designed to get to the root of behaviors, so you can anticipate how people would use your product and why. However, if you want to get into the data representing how people are using your product today, that's where quantitative testing comes in. To kick off our quantitative method deep dive, let's discuss A/B testing, starting with an example.
Ever notice that your Facebook feed has a different design than before, but when you mention it to your other friends on Facebook, they don't know what you're talking about? That's likely because you're part of an A/B test that Facebook is running. They are evaluating whether the new design you're seeing performs better than the original. They may want to know if you'd spend more time on Facebook or if you're more likely to click on an ad when using this new feed design.
During an A/B test, one group of visitors to your website sees a specific version of it, while another group sees another version. At the end of the study, you will need to determine which of the website versions performed better, based on a hypothesis such as, "version B will receive more ad clicks than version A."
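The video doesn't show the math behind that decision, but a hypothesis like this is typically checked with a two-proportion z-test on the conversion counts once the study ends. Here's a minimal Python sketch of that check; the function name and all the visitor and click counts are made up for illustration:

```python
from math import erf, sqrt

def two_proportion_ztest(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for the difference between two conversion rates.

    conv_a / conv_b: number of converted visitors in each group
    n_a / n_b:       total visitors in each group
    Returns (z statistic, two-sided p-value).
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled rate under the null hypothesis "A and B perform the same".
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical counts: version B (experiment) vs. version A (control).
z, p = two_proportion_ztest(conv_a=120, n_a=2400, conv_b=156, n_b=2400)
print(f"z = {z:.2f}, p = {p:.4f}")  # p < 0.05 would support "B beats A"
```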
So how do you know when to run an A/B test, and how do you get started running one?

First, you may notice something is off in your analytics data. Perhaps visitors are abandoning your site or app before reaching their goals. In our Amazon example, this was when people were adding air conditioners to their shopping carts, but a high number of them were leaving before making a purchase.

Second, find out why. One of the ways to do this is something we've already discussed: a usability study. This may lead you to discover what is throwing the users off track. Perhaps it's the need for more product information, or maybe the checkout button is hard to find.

Third, propose a potential solution. What can address the problem you've identified? Perhaps it's adding more product information, or maybe it's moving the checkout button. Propose one design solution that you can test against the current site in an A/B experiment.
If you follow the steps we just discussed, you may be ready to start running an A/B test. Let's take a look at the technical steps involved and how to create a test that can produce statistically significant results.

When we talked about testing the current site versus a potential design solution, this was in reference to the control versus the experiment. The control group is the group of people who will see the original website design. It is used as the baseline for your experiment. The new design should be shown to the experimental group to see if it outperforms the control. If you want to have additional experimental groups, that would be referred to as multivariate testing, but we won't be going into that in this class.
For a website or app that receives few visitors, you should plan on assigning roughly half the visitors to the control and half to the experiment. If you have many visitors, like in the Facebook example, you may need to assign as little as 1% or even less, simply because even at 1% you will quickly have enough volume to draw meaningful conclusions.
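One common way to implement this split (an illustration, not something prescribed in the video) is to hash a stable visitor ID into a bucket, so the same person always lands in the same group. The function name and the 1% experiment share below are assumptions for the sketch:

```python
import hashlib

def assign_group(visitor_id: str, experiment_share: float = 0.01) -> str:
    """Deterministically assign a visitor to 'experiment' or 'control'.

    Hashing the visitor ID (rather than picking randomly on every page
    view) guarantees each person always sees the same design.
    """
    digest = hashlib.sha256(visitor_id.encode("utf-8")).hexdigest()
    # Map the first 8 hex digits onto [0, 1) and compare to the share.
    bucket = int(digest[:8], 16) / 0xFFFFFFFF
    return "experiment" if bucket < experiment_share else "control"

# A high-traffic site might send only 1% of visitors to the new design;
# a low-traffic site would raise experiment_share to 0.5 for a 50/50 split.
print(assign_group("visitor-42"))                        # 1% experiment
print(assign_group("visitor-42", experiment_share=0.5))  # 50/50 split
```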
The design alternatives that you're testing must be limited to contain specific, isolated changes. Examples include the color of a critical button, the placement of a specific element, text variations, or perhaps a header image. This is important so that at the end of the study you can say, with confidence, what exactly caused the difference in performance. For example, was it the color, the placement, the text, or the image? Even small, iterative changes matter, and can add up to large changes in the total revenue or lead generation from your product.
A/B testing will compare two versions of a product in order to find which one is better at achieving a higher conversion rate. What is a conversion rate? It is the rate at which visitors to your site get to a particular goal that you have chosen. Examples include making a purchase, signing up for a service, or subscribing to a newsletter.
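In code, the metric itself is just a ratio of conversions to visitors; the numbers below are hypothetical:

```python
def conversion_rate(conversions: int, visitors: int) -> float:
    """Share of visitors who reached the chosen goal (purchase, sign-up, ...)."""
    return conversions / visitors

# Hypothetical numbers: 156 purchases out of 2,400 visitors.
print(f"{conversion_rate(156, 2400):.1%}")  # -> 6.5%
```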
How long should one of these studies run for? Unfortunately, there's no single answer, since much of it will depend on how much traffic your site gets. Start with how big you want your sample size to be, and stop your study when you've reached that size. The sample size can be determined by the size of the conversion improvement you're looking for and the confidence you'd like to have in that answer. I've provided a link to a guideline for helping you determine the sample size in the teacher's notes.

If your site traffic or user behavior varies according to the day of the week, extend your test to about two weeks. That way, you test each day of the week an equal amount of time. It should also help account for unique spikes due to things like a newsletter coming out, a holiday, the consumer's payday, or people just looking around your site but coming back much later to complete the actual purchase.
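The guideline linked in the teacher's notes is the place to start, but as a rough sketch of what such a sample-size calculator does, here is the standard approximation for two proportions at a 5% significance level and 80% power, given a baseline conversion rate and the smallest improvement you want to detect. The function name and all the rates are hypothetical:

```python
from math import ceil

def sample_size_per_group(baseline, lift, z_alpha=1.96, z_power=0.84):
    """Approximate visitors needed per group to detect a conversion change.

    baseline: current conversion rate (e.g. 0.05 for 5%)
    lift:     smallest absolute improvement you care about (e.g. 0.01)
    z_alpha:  1.96 -> 5% two-sided significance level
    z_power:  0.84 -> 80% power
    """
    p1, p2 = baseline, baseline + lift
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    n = variance * ((z_alpha + z_power) / lift) ** 2
    return ceil(n)

# Hypothetical goal: detect a jump from a 5% to a 6% conversion rate.
print(sample_size_per_group(0.05, 0.01))  # roughly 8,100+ visitors per group
```

Note how the required sample size grows as the lift you want to detect shrinks, which is why low-traffic sites need longer tests.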
Once you've understood the basics behind A/B testing, implementation should be the easy part. There are many tools out there to help you. Just make sure you know what you want to test and what success metrics you're going to compare: purchases, sign-ups, subscriptions, or something else.