[2/5] Testing Fundamentals: Control vs Variation Explained
The two-version setup that powers every test
Control vs Variation: The Split Test Basics
⏱️ 2-min read
Every A/B test starts with a simple decision: what stays the same and what changes?
That decision determines everything about your test. Get it wrong and you waste time and traffic learning nothing.
What Control Means
The control is your original. It’s what exists right now. It’s the baseline you’re trying to beat.
Your control might be a landing page that’s been live for two years. Or an email template you’ve used fifty times. Or a checkout flow that currently converts at 8%.
Treat your control with respect. It represents all the decisions made before this test. Sometimes those assumptions are right. Sometimes they’re outdated. Sometimes they were never validated in the first place.
What Variation Means
The variation is your new idea. It’s the challenger. It’s what you think might work better.
Good variations come from specific insights. Bad variations come from blog posts about best practices.
Weak example: “Changing the button from blue to orange will increase clicks because orange is more visible.”
Strong example: “Users aren’t clicking the call to action button because session recordings show they scroll past it without stopping. Moving the button higher on the page will increase clicks because users will encounter it before deciding to leave.”
See the difference? The strong version connects an observation to a proposed solution to an expected outcome.
The 50/50 Split Rule
Most tests send 50% of traffic to control and 50% to variation. This is the cleanest approach. It gives you the fastest path to a reliable answer.
Your testing tool handles the splitting automatically. It randomly assigns each visitor to either version. Random assignment matters because it prevents bias.
Without random assignment, you introduce unfairness. Imagine sending all mobile traffic to the variation and all desktop traffic to the control. If mobile converts worse for reasons unrelated to your test, your variation will look like it failed even if your change was good.
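The random split your tool performs can be sketched in a few lines of Python (the function name and seeding scheme here are illustrative, not any specific tool's API). Hashing the visitor ID into the random seed keeps the assignment stable, so a returning visitor always sees the same version:

```python
import random

def assign_variant(visitor_id, seed=42):
    """Deterministically assign a visitor to 'control' or 'variation'.

    Seeding the RNG with the visitor ID makes the assignment random
    across visitors but stable for any single visitor across visits.
    """
    rng = random.Random(f"{seed}:{visitor_id}")
    return "control" if rng.random() < 0.5 else "variation"
```

Over many visitors this converges to roughly a 50/50 split, with no systematic bias toward mobile, desktop, or any other segment.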
Change One Thing Only
In simple testing, your variation should change one thing. Not two things. Not five things. One thing.
Why? Because if you change the headline, the image, and the button color all at once, and your conversion rate goes up, you have no idea which change caused it.
Maybe the headline worked and the button color actually hurt. You’ll never know.
What Counts as a Conversion
Before you start any test, define exactly what success looks like. This is your conversion goal.
For a landing page, success might be email signups. For a pricing page, success might be clicking “Start Free Trial.” For a checkout page, success is completed purchases.
Pick one primary goal per test. Your tool tracks conversions by watching for specific events like landing on a thank you page or clicking a specific button.
Set up your conversion tracking before you launch the test. Verify it works. Many tests fail not because the variation was bad but because conversion tracking broke and nobody noticed.
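Under the hood, conversion reporting is just a tally per variant. Here's a minimal sketch (the event format is a simplifying assumption, not a real tool's export):

```python
from collections import Counter

def conversion_rates(events):
    """Compute conversion rate per variant.

    events: iterable of (variant, converted) pairs, one per visitor,
    e.g. ("control", True) for a control visitor who converted.
    """
    shown = Counter()
    converted = Counter()
    for variant, did_convert in events:
        shown[variant] += 1
        if did_convert:
            converted[variant] += 1
    return {v: converted[v] / shown[v] for v in shown}
```

A quick sanity check like this on exported event data is also a cheap way to confirm your tracking is actually firing before you trust the test results.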
━━━━━━━━━━━━━━━━━━━━━━
💡 QUICK WIN
Before your next test, write down: “IF I change [specific element] THEN I expect [outcome] BECAUSE [observation]”
That’s your hypothesis. It forces clear thinking.
━━━━━━━━━━━━━━━━━━━━━━
Coming up in Part 3:
The 6-step process for running any test from start to finish.
Reply with questions anytime.
– Atticus
P.S. The biggest beginner mistake is changing multiple things at once on an important page, hoping for a higher conversion rate. You’ll be tempted to do it. Don’t. Isolate variables and you’ll actually learn something.

