Machine Learning for Automated A/B Testing

Steve Hanov posted this interesting article about automated A/B testing. Normally in A/B testing, you create two (or more) variations of something, whether that's button color, placement, text, or even two completely different versions of a page (though it's usually done for smaller, subtler variations than that), and record how well each version performs [1]. After letting the test run for a while, you look at your metrics, determine which version performed best, and update your code to serve that version on the live site.
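
As a rough sketch of the bookkeeping a classic A/B test involves (the variant names and function names here are mine, made up purely for illustration):

```python
import random

views = {"A": 0, "B": 0}        # how often each variant was shown
conversions = {"A": 0, "B": 0}  # how often it achieved the goal metric (see note 1)

def assign_variant():
    """Classic A/B: split traffic evenly at random."""
    variant = random.choice(list(views))
    views[variant] += 1
    return variant

def record_conversion(variant):
    """Call when the visitor who saw `variant` completed the goal."""
    conversions[variant] += 1

def report():
    """At the end of the test, compare these rates by hand and update the code."""
    return {v: conversions[v] / views[v] for v in views if views[v]}
```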

Well, Steve took that idea and added some machine learning to it. The article describes the details, but the basic concept is that as your site collects data, it starts showing the “best” version to customers more often. That gives you a nearly instantaneous feedback loop, and no code changes are required to switch to the winning version. I imagine that at some point you might want to get rid of the A/B component (otherwise some percentage of your users will always be seeing a non-standard version of the site), but I can see an argument being made for never getting rid of it, too.
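
The approach in the article is, as I understand it, an “epsilon-greedy” multi-armed bandit: a small fraction of the time you show a random variant (so you keep learning), and the rest of the time you show whichever variant has the best success rate so far. Here is a minimal sketch in Python; the class name, variant names, and the 10% exploration rate are my own illustrative choices, not lifted from Steve's code:

```python
import random

class EpsilonGreedyTest:
    """A/B testing as a multi-armed bandit: usually show the variant
    performing best so far, occasionally try the others."""

    def __init__(self, variants, epsilon=0.1):
        self.epsilon = epsilon                 # fraction of traffic spent exploring
        self.shows = {v: 0 for v in variants}  # times each variant was displayed
        self.wins = {v: 0 for v in variants}   # goal events (clicks, sales, ...) per variant

    def choose(self):
        """Pick the variant to show the current visitor."""
        if random.random() < self.epsilon:
            variant = random.choice(list(self.shows))  # explore: pick at random
        else:
            # exploit: best observed success rate; max(..., 1) avoids dividing by zero
            variant = max(self.shows, key=lambda v: self.wins[v] / max(self.shows[v], 1))
        self.shows[variant] += 1
        return variant

    def record_win(self, variant):
        """Call when the visitor who saw `variant` completed the goal."""
        self.wins[variant] += 1
```

Because `choose()` keeps exploiting the current leader, traffic automatically shifts toward the winner as data accumulates, which is exactly the “no code updating necessary” property described above. For example:

```python
test = EpsilonGreedyTest(["orange_button", "green_button"])
variant = test.choose()   # render this version of the page
# ...later, if that visitor clicked or bought:
test.record_win(variant)
```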

I like this methodology, and I believe we will be trying it out some at work.


Notes:

  1. “Performs” can have different meanings, depending on what you’re looking for. It can mean how often the link gets clicked, how many sales each variation produces, how long people look at a page, or whatever other metric you choose to use.
