[5/5] Testing Fundamentals: Five Mistakes That Kill Tests
Don't waste time on these beginner errors
You’ve learned what A/B testing is, how to set up control vs variation, the 6-step process, and which test types exist.
Now let’s cover the mistakes that make tests worthless. Avoid these and you’ll get better results than most people running experiments.
Mistake 1: Stopping Tests Too Early
This is the biggest error beginners make. You check results after two days, see version B winning, and declare victory. Then you implement it and the lift disappears.
Early results lie. With only a handful of conversions, rates swing wildly in the first few days of a test, then stabilize as the sample grows.
What to do instead: Predetermine your sample size using a calculator. Run the test until you hit that number. Don’t peek. Don’t make decisions based on trends. Trust the math.
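If you’d rather script the math than use an online calculator, here’s a rough sketch of the standard two-proportion sample-size formula (the same kind of math behind tools like Evan Miller’s calculator), assuming a 95% confidence level and 80% power. The function name and example numbers are illustrative, not from any specific tool:

```python
import math

def sample_size_per_variant(baseline_rate, relative_lift):
    """Approximate visitors needed per variant to detect a relative lift
    at 95% confidence (two-sided) and 80% power.
    Illustrative sketch only -- use a proper calculator for real tests."""
    p1 = baseline_rate
    p2 = baseline_rate * (1 + relative_lift)
    z_alpha = 1.959964  # two-sided 95% confidence
    z_beta = 0.841621   # 80% power
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(numerator / (p2 - p1) ** 2)

# Example: 5% baseline conversion, hoping to detect a 20% relative lift
print(sample_size_per_variant(0.05, 0.20))  # roughly 8,200 visitors per variant
```

Notice how quickly the required sample grows as the expected lift shrinks — that’s why low-traffic pages (Mistake 3 below) are so hard to test.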
Mistake 2: Testing Multiple Things at Once
You change the headline, the image, the button color, and the form length all in one test. Version B wins. Great! But now you have no idea which change caused the result.
Maybe the headline worked but the button color actually hurt. You’ll never know because you changed too much.
What to do instead: Test one variable at a time. Isolate changes so you know what caused results. It takes longer but you actually learn something.
Mistake 3: Running Tests Without Enough Traffic
You test a page that gets 50 visitors per week. You need thousands of visitors to detect real differences. Your test runs for months and never reaches significance.
Low traffic means you’re guessing with extra steps. The math doesn’t work below certain thresholds.
What to do instead: Calculate required sample size before starting. If you don’t have enough traffic, pick a higher traffic page to test. Or test something with much bigger expected impact so you need fewer visitors to detect the difference.
Mistake 4: Ignoring Statistical Significance
Your testing tool says “not significant” but version B looks better in the graph so you ship it anyway. Congratulations, you just made a decision based on random noise.
Statistical significance exists for a reason. It tells you whether the difference is real or just chance. Ignoring it means you’ll implement changes that don’t actually work.
What to do instead: Wait for 95% confidence minimum. If you don’t hit it after a reasonable timeframe, call the test inconclusive and move on. Not every test produces a clear winner. That’s okay.
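If your tool doesn’t report significance, or you want to sanity-check the number it shows, a minimal two-proportion z-test looks like this. The visitor and conversion counts below are hypothetical:

```python
import math

def ab_p_value(visitors_a, conversions_a, visitors_b, conversions_b):
    """Two-sided p-value for the difference between two conversion rates.
    Illustrative sketch only -- real tools handle edge cases this doesn't."""
    p_a = conversions_a / visitors_a
    p_b = conversions_b / visitors_b
    # Pooled rate under the assumption that A and B convert identically
    p_pool = (conversions_a + conversions_b) / (visitors_a + visitors_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / visitors_a + 1 / visitors_b))
    z = (p_b - p_a) / se
    return math.erfc(abs(z) / math.sqrt(2))  # two-sided normal p-value

# Hypothetical test: 8,200 visitors per variant, 5.0% vs 6.0% conversion
p = ab_p_value(8200, 410, 8200, 492)
print(f"p-value: {p:.4f}", "-> significant" if p < 0.05 else "-> inconclusive")
```

A p-value below 0.05 corresponds to the 95% confidence threshold above; anything higher means the difference could plausibly be noise.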
Mistake 5: Never Documenting Results
You run 20 tests but have no record of what you learned. Six months later, someone suggests testing the same thing again because nobody remembers.
Or worse, you can’t show hiring managers or stakeholders what you’ve accomplished because you have no documentation.
What to do instead: After every test, write down what you tested, why you tested it, what the results were, and what you learned. Create a simple spreadsheet or use a project management tool. This record becomes your testing knowledge base and your portfolio.
Your Next Step
You now know the fundamentals of A/B testing. You understand control vs variation, the testing process, different test types, and common mistakes.
The next step is simple: run a test.
Pick a high-traffic page with a clear conversion goal. Identify one obvious problem from your analytics or session recordings. Create a variation that solves that problem. Set up the test in your tool. Let it run for two full weeks. Document the results.
That first test teaches you more than reading ten articles ever could. You’ll see how long it takes to get results. You’ll learn your tool. You’ll discover how much conversion rates naturally fluctuate.
Then do it again. Each test builds your judgment about what’s worth testing and what matters to your audience.
━━━━━━━━━━━━━━━━━━━━━━
💡 QUICK WIN
Create a testing documentation spreadsheet today.
Columns: Test Name, Hypothesis, Start Date, End Date, Result, Learning, Next Test. Fill it out after every test you run.
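If a spreadsheet app feels heavier than you need, even a short script can scaffold the log as a CSV file (the file name here is just an example):

```python
import csv

COLUMNS = ["Test Name", "Hypothesis", "Start Date", "End Date",
           "Result", "Learning", "Next Test"]

# Create the log with a header row; append one row after every test you run.
with open("ab_test_log.csv", "w", newline="") as f:
    csv.writer(f).writerow(COLUMNS)

print("Created ab_test_log.csv with columns:", ", ".join(COLUMNS))
```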
━━━━━━━━━━━━━━━━━━━━━━
This concludes the series.
You’ve learned the fundamentals that separate junior from senior experimenters. Now go apply them.
Reply anytime with questions or results from your tests. I read every response and I’m here to help.
– Atticus
P.S. Want to go deeper? Here are two resources worth your time:
Evan Miller’s sample size calculator (bookmark this)
“Trustworthy Online Controlled Experiments” by Kohavi et al. (the textbook if you want to get serious)
Now stop reading and go find problems to solve!

