Over the past couple of days, I’ve had the opportunity to attend Opticon 2015, a huge conference for the experience optimization and testing community. Hosted by Optimizely, the event was a gold mine of information on A/B testing, personalization, and website experimentation.
This mini-series will contain summaries of some of the most helpful breakout sessions I attended. As data becomes easier to gather and all the more important in company decision-making, product managers should stay aware of the latest and greatest ways to test and make data-backed decisions.
In this post, I’ll be summarizing the main points from Esben Rasmussen and Jeppe Dyrby’s excellent session, “10 Secrets to Building an Amazing Mobile Testing Roadmap”. Esben and Jeppe are both engineers on a dedicated A/B testing team at eBay in Denmark.
1. Have goals for A/B testing
For Esben and Jeppe’s team, the main goals are to make a better product, increase revenues, and get a higher customer satisfaction score. Goals may vary across different companies – the key here is to have good reasons for testing, because these reasons will end up providing the direction on all the tests that the team decides to run.
2. Get a dedicated team
eBay’s experimentation team was nicknamed the “Tenzing team,” after the Sherpa Tenzing Norgay, who accompanied Sir Edmund Hillary to become one of the first mountaineers to reach the top of Mount Everest. A dedicated experimentation team may not be possible for every company (due to size, financial resources, etc.), but having one does wonders for making A/B testing a priority without pulling resources away from ongoing projects or distracting other project teams from delivering their products.
3. Motivate your team
Set financial goals, then stick to those goals throughout the testing process. At the same time, make the team environment a fun one. Esben and Jeppe set “fun-dex” goals to make sure that people aren’t forgetting to enjoy doing what they’re doing.
4. Define your testing roadmap
Whatever you do, don’t put together your roadmap on a laptop – Esben and Jeppe believe that this kills creativity. At this stage of the process, just throw out ideas and don’t worry about vetting them yet. Divide the ideas into specific areas, then estimate the effort and expected revenue for each in a spreadsheet. With that information, it becomes easy to plot effort vs. user value vs. expected revenue on a diagram.
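As a quick illustration (my own, not from the session), once the effort and revenue estimates are in a spreadsheet, ranking ideas by expected revenue per unit of effort is one simple way to get a first-pass ordering. The ideas and numbers below are entirely made up:

```python
# Hypothetical sketch: rank test ideas by expected revenue per unit of effort.
# All ideas and estimates here are illustrative, not from the talk.
ideas = [
    {"name": "Simplify checkout flow", "effort": 8, "expected_revenue": 50000},
    {"name": "New button color",       "effort": 1, "expected_revenue": 2000},
    {"name": "Rearranged search page", "effort": 5, "expected_revenue": 30000},
]

# Score each idea: expected revenue per unit of effort.
for idea in ideas:
    idea["score"] = idea["expected_revenue"] / idea["effort"]

# Highest revenue-per-effort first: candidates for the top of the roadmap.
roadmap = sorted(ideas, key=lambda i: i["score"], reverse=True)
for idea in roadmap:
    print(f'{idea["name"]}: score {idea["score"]:.0f}')
```

In practice you’d weigh user value and strategic fit too, not just a single ratio – but even this crude score makes the trade-offs visible.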
5. Have a test strategy
There’s the small stuff, such as the positions, sizes, and colors of buttons, and there’s the heavy stuff, such as the features and flows of a web page. Make sure the strategy makes sense for the specific channel. Esben and Jeppe found that the desktop experience can support more thorough tests, with more variations and rearranged flows, than the mobile experience.
6. Visualize your workflow
The most common way to visualize a workflow is the scrum board, but their team doesn’t use one, instead opting for a “bubble” board organized into themes. These themes move through build-measure-learn cycles and have hypotheses attached to them. The key is to see all the tests the team is planning to run and where each test sits in the cycle at any given point.
7. Implement your test
Esben and Jeppe have found that the number of variations in an A/B test tends to grow during implementation, so they recommend limiting it to a maximum of three variations to avoid waiting too long for results. Consider skipping automated regression tests, since these are short-lived, temporary code changes (but do test manually to make sure the experiment will work).
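To see why extra variations stretch the timeline, here’s a rough back-of-the-envelope sketch (my own, not from the talk) using the common rule of thumb of about 16·p(1−p)/δ² samples per variation for roughly 80% power at 5% significance. All the numbers are assumptions:

```python
# Rough sketch (not from the talk): why extra variations stretch test runtime.
# Rule of thumb for comparing two proportions at ~80% power, 5% significance:
#   n per variation ≈ 16 * p * (1 - p) / delta**2
p = 0.05               # assumed baseline conversion rate
delta = 0.005          # assumed minimum detectable lift (absolute)
daily_visitors = 10000  # assumed traffic to the tested page

n_per_variation = 16 * p * (1 - p) / delta ** 2

# Traffic is split across variations, so more variations = more days.
for variations in (2, 3, 5):
    days = variations * n_per_variation / daily_visitors
    print(f"{variations} variations: ~{days:.1f} days to finish")
```

With these made-up numbers, going from two variations to five nearly triples the runtime – which is exactly why capping the variation count keeps results coming quickly.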
8. Measure the outcome
This is where the initial goals come into play. The eBay team measures the expected revenue and the NPS (Net Promoter Score, a customer satisfaction index). Sometimes you won’t get valid results, which means a re-test may be necessary.
9. Learn the lessons
The clearest lessons are in the numbers – if revenue increased for a particular variation, then that variation should be a priority to implement. If the original won, then the test validated that you had already made the right decision in the original implementation. If there is no conclusion, then either users didn’t care about the change or the experiment was measured the wrong way.
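For the curious, here’s a minimal sketch of one common way to check whether a result is statistically valid – a two-proportion z-test. This is an illustration with made-up counts, not eBay’s actual tooling:

```python
import math

# Illustrative sketch (not eBay's actual tooling): a two-proportion z-test
# to check whether a variation's conversion lift is statistically significant.
def z_test(conv_a, n_a, conv_b, n_b):
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled conversion rate under the null hypothesis (no difference).
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Made-up counts: control converted 500/10000, variation 580/10000.
z = z_test(500, 10000, 580, 10000)
# |z| > 1.96 corresponds to p < 0.05, two-sided.
print("significant" if abs(z) > 1.96 else "inconclusive: consider a re-test")
```

If |z| falls short of the threshold, that’s the “no conclusion” case above – either users didn’t care, or the test needs more traffic or a better-chosen metric.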
10. Make the decision
Implement the change for the winning variations. Esben and Jeppe’s team owns these changes, as opposed to the original project teams; however, if the effort is really large, they enlist the help of other teams. As mentioned earlier, the advantage of having a dedicated experimentation team implement the changes is that it lets the other project teams stay focused on their own work and goals.
11. Turn it up to eleven!
Give your team a mandate to fail – failure is important in A/B testing because it teaches you what works and what doesn’t. And don’t forget to set the team goal and follow it throughout the testing cycle!
Interested in learning more about mobile testing? Chat with other product leaders around the world in our PMHQ Community!