All About A/B Testing from the XChange Huddle

Although I work for Semphonic, which hosts the XChange conference, I don't get to go to whatever huddles I want. (I should negotiate that in my next review.) So I was particularly happy to catch the testing huddle hosted by Dylan Lewis of Intuit. Dylan blew me away with his expertise.

Here are some of my take-aways:

1. You need to operationalize testing. This means a dedicated team with a time-bound process. Intuit runs weekly tests, and nothing gets on the home page without testing its way up there. Dylan calls the initial pow-wow you need to have to set your testing philosophy "bootcamp". Do this in advance with your analytics team and perhaps a product manager or two.

2. That brings me to the second point: have a clear hypothesis. You can't evaluate a test without knowing whether the result is good or bad, so pick one metric (just one) to optimize for and establish a benchmark. You can use an A/A test (see #4 below) to help create the benchmark, then apply statistics to determine what a significant change would be.
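To make "a significant change" concrete, here is a minimal sketch of one common approach: a two-proportion z-test comparing a variant's conversion rate against the benchmark. The numbers, the conversion metric, and the 0.05 threshold are my own illustrative assumptions, not anything Dylan prescribed:

```python
from math import sqrt, erf

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test: does variant B's conversion rate differ
    significantly from benchmark A's?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical traffic: 10.0% benchmark rate vs. 13.0% variant rate
z, p = two_proportion_z_test(100, 1000, 130, 1000)
print(f"z = {z:.2f}, p = {p:.3f}")  # p < 0.05, so significant here
```

The useful habit is deciding the metric, the sample size, and the significance threshold before the test starts, so nobody can move the goalposts after seeing the result.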

3. Know where you want to go with the hypothesis, but don't drag out the test if you get a result you don't like. Dylan says a lot of their tests are flat (no significant change), so just move on. A good process is to keep a queue of tests ready to go; if someone asks to keep a flat test running, simply say sorry, you need the traffic for the next one.

4. A/A testing runs the site against itself. The purpose is to knock out irregularities such as robots executing JavaScript code, which I mentioned in my earlier post about statistical testing. The A/A test can also help you find the benchmark you will use for your hypothesis. However, if you feel the traffic is pretty clean and you just want to instill trust in your testing culture, you could run an A/A/B test: the site against itself, plus a third instance carrying the variation you want to test.
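One common way to implement an A/A/B split is deterministic hash-based bucketing, so a returning visitor always lands in the same bucket. Here's a minimal Python sketch; the bucket names, equal weights, and choice of MD5 are my own illustrative assumptions, not Intuit's setup:

```python
import hashlib

def assign_bucket(visitor_id, buckets=("A1", "A2", "B"), weights=(1, 1, 1)):
    """Deterministically map a visitor ID to a test bucket.

    Hashing (rather than random assignment) keeps each visitor in
    the same bucket on every visit, which a stable A/A/B test needs.
    """
    digest = int(hashlib.md5(visitor_id.encode("utf-8")).hexdigest(), 16)
    slot = digest % sum(weights)
    cumulative = 0
    for bucket, weight in zip(buckets, weights):
        cumulative += weight
        if slot < cumulative:
            return bucket

# The same visitor always gets the same bucket:
print(assign_bucket("visitor-12345") == assign_bucket("visitor-12345"))  # True
```

If the two A buckets show a significant difference in your key metric, something other than the page is moving the numbers, which is exactly the irregularity the A/A comparison is meant to catch.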

5. The last point, which I remember hearing from Avinash (who previously worked at Intuit), was echoed by Dylan as well: make testing fun. Ask people to bet on the outcome. Create prizes or awards. Anything to instill the culture that testing is doable and, ideally, profitable. With any luck this will lead to more senior buy-in and a nice cycle of more budget, testing, and resources for your team.