
Conversion Optimization — Back To Basics with Research and Experimentation.

Updated: Aug 23, 2021

Conversion Optimization carries a lot of misconstrued definitions, which often leads to the wrong idea about what it involves and the part it plays in experimentation. Articles explaining the methods and principles of CRO and Experimentation are common, but more often than not they fall short of giving concise, accurate explanations of what CRO involves and the part conversion research plays in Experimentation.

In this article, I will explain the basics of conversion research and the part it plays in A/B testing. Conversion Optimization is a method of improving the customer journey on a website: through research you find sources of friction (for example: usability problems, bad or unclear copy, bugs, data reporting problems in Analytics) and other factors that lead to a poor experience, then you test hypotheses drawn from that research to validate whether fixing them leads to a rise in revenue and profit margins.


When done properly, the results of experimentation should drive the decision making process of the business by removing the need to rely on opinions and controversial takes on what changes should be made.

Conversion Research

Conversion research plays an extremely important role in the process of Experimentation (A/B testing). It is safe to say that if you mess up your research phase in any way, the resulting hypotheses you run experiments on will be deeply flawed. Done properly, conversion research involves implementing processes and steps to better understand the customers accessing your website and to see how you can optimize the site to improve their journey.

Through these methods, you start to empathize with the customers and see how certain processes and checks your business runs might be interrupting the flow in the customer journey.

Conversion research should never take the back seat in the experimentation process. Here's an analogy: if experimentation is the engine that drives data-driven decision making, then conversion research is the frame that holds everything together. It ensures that the decisions being taken address issues that directly affect the customer journey, and that any change made based on the results of experimentation can lead to positive uplifts in your primary conversions, revenue, and profit margins.

When conducting conversion research, some CRO practitioners are tempted to rely only on data from their digital analytics to come up with test hypotheses. Yet oftentimes, qualitative research offers more insight than anything else when it comes to forming winning test hypotheses.

Where quantitative data tells you what, where, and how much, qualitative data tells you why. The goal of qualitative research is to gather an in-depth understanding of user behavior and why users took those specific actions.

Combined, you start to have a clearer picture of what drives specific behaviors on your website and what you can do to optimize your website to reduce or increase those behaviors.

Here is a Conversion Research case study from Walmart:

Problem: Walmart Canada was seeing a significant amount of traffic coming from mobile, mainly tablet devices, and recognized that its current solution did not really work on mobile. First, the overall look and feel of the site on mobile was awful; second, it took forever to load.

Opportunity: Their prior research gave them plenty of data about which screen sizes and browsers were used most, so they went to work making the overall experience faster and responsive to whatever screen size or device it was run on.

Solution: Walmart ran hands-on usability testing on both the old and new designs, A/B tested various elements on the site, and made speed improvements throughout. In the end, the new design achieved an overall conversion boost of 20% across all devices; on mobile, orders went up by 98% — not bad! (See full case study here.)


Experimentation

Experimentation is debatably the most exciting part of conversion optimization, because this is the point where you find out whether the hypotheses you came up with hold and, if they do, what impact the change has on the company's revenue or profit margins.

In a world where over 70% of online businesses run experiments, choosing not to follow a data-driven methodology for making informed decisions about website changes seems unreasonable.

A powerful hypothesis is the key to discovering impactful findings about your online business. If you are not already collecting customer objections via surveys, feedback, usability testing, etc., you are unlikely to see meaningful wins in your testing program. The strength of your hypothesis is directly proportional to your understanding of your online business.

The process of A/B split-testing considers business goals; it weighs risk vs. reward, exploration vs. exploitation, science vs. business. Using experimentation to validate hypotheses is a way of managing the risks to your business.

A/B testing is an invaluable resource to anyone making decisions in an online environment. With a little bit of knowledge and a lot of diligence, you can mitigate many of the risks that most beginning optimizers face. If you are a business leader making decisions from A/B tests, especially if you are not running those tests, you NEED to understand what that data is telling you.

An A/B test cannot tell you categorically what to do — it is simply a statement about the probability of one thing happening. If you don't understand this, and how to use it to make decisions, you could be making a lot of mistakes.
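To make that "statement about probability" concrete, here is a minimal sketch of the arithmetic behind a standard two-proportion z-test, the kind of significance check most A/B testing tools run under the hood. The function name and the numbers are my own illustration, not from this article.

```python
import math

def ab_test_significance(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test: compares variant B's conversion rate
    against variant A's. Returns (z statistic, two-sided p-value)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled conversion rate under the null hypothesis (no difference).
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF (via math.erf).
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical test: 200/5000 conversions on control, 250/5000 on variant.
z, p = ab_test_significance(200, 5000, 250, 5000)
print(f"z = {z:.2f}, p = {p:.4f}")
```

Note what the output is and is not: a low p-value says "this difference is unlikely under chance", not "ship the variant" — the decision still has to weigh effect size, cost, and risk.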

If you really dig into the information here, you’ll be ahead of 90% of businesses running tests.


Knowledge is a limiting factor that only experience and iterative learning can transcend. Your experimentation culture strongly determines how successful your winning process is.

To simplify a winning process, the structure goes something like this:

  • Research;

  • Prioritization;

  • Experimentation;

  • Analyze, learn, repeat.
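The "prioritization" step above can be sketched with a simple scoring model. The article does not prescribe a specific framework, so this example assumes an ICE-style score (Impact, Confidence, Ease, each rated 1-10); the backlog ideas and ratings are purely illustrative.

```python
def ice_score(impact, confidence, ease):
    """Average the three 1-10 ratings into a single priority score."""
    return (impact + confidence + ease) / 3

# Hypothetical backlog of test ideas produced by conversion research.
backlog = [
    {"idea": "Clarify shipping-cost copy on checkout", "impact": 8, "confidence": 7, "ease": 9},
    {"idea": "Redesign product page layout", "impact": 9, "confidence": 5, "ease": 3},
    {"idea": "Fix broken coupon field on mobile", "impact": 7, "confidence": 9, "ease": 8},
]

# Experiment on the highest-scoring hypotheses first.
ranked = sorted(
    backlog,
    key=lambda i: ice_score(i["impact"], i["confidence"], i["ease"]),
    reverse=True,
)
for item in ranked:
    score = ice_score(item["impact"], item["confidence"], item["ease"])
    print(f'{score:.1f}  {item["idea"]}')
```

The exact weights matter less than having an explicit, repeatable rule, so the backlog order is decided by criteria rather than by whoever argues loudest.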

Be sure to understand A/B testing statistics and their implications, as well as how to analyze the results of your tests; you can read more about A/B testing here. I hope you learned something today!
