With so many different approaches and methodologies out there, it can be hard to see through the smoke screen and pin down the real value of conducting a/b testing.

When used correctly, a/b testing can be a very powerful tool that can, and should, dictate not only whether to use ‘good’ or ‘great’ on a CTA, but also which priorities you, as a company, should set in your product roadmap.🛣️


One of the common a/b testing missteps I frequently see in startups (and some larger companies too) is basing tests on microscopic changes to the object being tested and praying that this small change holds the key to tripling growth and securing the next round of funding 🔬. Results born from random variation (read: no statistical significance) can be a recipe for disaster📉 if they’re believed to be the winning combo and implemented. Fast-forward to when you’ve built the “winning feature”, dedicated time, resources, and capital to it, and you realize that your initial sample size was way too small and that your product or feature is about to fail miserably.
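To make “no statistical significance” concrete — the conversion numbers below are invented purely for illustration — here’s a minimal sketch of the standard two-proportion z-test, showing how the exact same apparent lift can be pure noise at a small sample size and a real signal at a larger one:

```python
from statistics import NormalDist

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # pooled conversion rate under the null hypothesis (no real difference)
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = (p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b)) ** 0.5
    z = (p_b - p_a) / se
    # two-sided p-value from the standard normal distribution
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# Tiny sample: a seemingly big lift (10% -> 14%) at 100 visitors per variant
z, p = two_proportion_z_test(10, 100, 14, 100)
print(f"small sample: p = {p:.3f}")   # well above 0.05 -> could be noise

# The identical lift at 2,000 visitors per variant
z, p = two_proportion_z_test(200, 2000, 280, 2000)
print(f"large sample: p = {p:.4f}")   # comfortably below 0.05
```

At 100 visitors per variant, that “40% relative lift” comes out around p ≈ 0.38 — squarely within random variation — while the same lift at 2,000 visitors per variant is significant at any conventional threshold. Same observed effect, completely different conclusion.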


This is, of course, not to say that every seemingly significant finding should be looked upon with skepticism; sometimes your tests yield significance because you’ve hit the nail on the head, in which case, go build!🛠️


🅰️/🅱️ testing as a discipline is interesting and can in many ways be just as unhelpful for your company when done wrong as it can be helpful when conducted correctly. Plenty of people have analyzed how NOT to conduct a/b testing; instead, I want to briefly shed light on key questions we’ve had success asking ourselves internally before conducting experiments:

  1. ✅ Do you have a clearly defined success metric for your test?
  2. ✅ Considering your traffic, what is an appropriate length of time to run this test, and at what point will the results be sufficiently robust and reliable to serve as cornerstones of future product decisions?

If you have answered these questions before conducting your test (and stick to them during the actual testing period), you’ve taken a solid step towards ensuring that your test(s) will produce worthwhile results.
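As a rough sketch of how the second question can be answered up front — the baseline rate, detectable lift, and traffic figures below are hypothetical — a standard power-analysis approximation translates your daily traffic into a minimum test duration:

```python
import math
from statistics import NormalDist

def required_days(baseline_rate, min_lift, daily_visitors_per_variant,
                  alpha=0.05, power=0.80):
    """Rough test-duration estimate for a two-sided test on conversion
    rates, assuming an even traffic split between the two variants."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # significance threshold
    z_beta = NormalDist().inv_cdf(power)           # desired statistical power
    p1 = baseline_rate
    p2 = baseline_rate + min_lift
    p_bar = (p1 + p2) / 2
    # standard approximation for the required sample size per variant
    n = ((z_alpha + z_beta) ** 2 * 2 * p_bar * (1 - p_bar)) / (min_lift ** 2)
    return math.ceil(n / daily_visitors_per_variant)

# e.g. 5% baseline conversion, smallest lift worth detecting = 1 percentage
# point (absolute), 300 visitors per variant per day
print(required_days(0.05, 0.01, 300))  # → 28 (days)
```

Committing to a number like this before launch — and not peeking early — is exactly what “stick to them during the actual testing period” means in practice: stopping the moment the dashboard flashes green is one of the easiest ways to manufacture a false positive.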


Finally, remember that we’re living in a brave new world of extremely rapid hypothesis-driven validation 💬 that is testable with low effort through interactive prototyping tools like InVision, Sketch, Marvel, Balsamiq or even a hand-sketch📝.


After all, it’s better to discredit your product hypothesis for a shiny new feature after a short stint of prototyping and testing than after having committed a team of five designers and developers to building it for one or more full sprints, only to realize that your users or customers have zero interest in what sounded like a cool idea after a (too-)long week in the office (oh yes, this happens more frequently than you’d care to believe!).

--

🏁