Conversion is often considered something of a holy grail for digital marketing efforts. After all, those efforts are designed to turn passive interest into active engagement (or revenue). Conversion rate optimization (CRO) is a discipline that attempts to formalize this work.
At its best, CRO is about increasing efficiency in your marketing process.
Without some sort of experimentation- or data-informed process, marketing would be largely intuition-led, relying on gut feelings and tried-and-true approaches that fail to rise above the competition.
With CRO, marketing processes are fueled by feedback loops, where the data comes from actual people who show an interest in interacting with the brand’s products, solutions, and marketing channels.
Whether you’re performing behavioral analysis, running A/B tests, or conducting qualitative surveys (don’t worry, we’ll discuss these in the Topics of this Chapter), CRO starts from hypotheses about your marketing efforts and seeks to find the best approach through rigorous and data-driven experimentation.
As a technical marketer, you should be familiar with the technological foundations of these tests. The web and app development processes in your organization need to be sensitive to the requirements of CRO, because data-driven and experimentation-driven approaches will help your developers evaluate their backlog and their work as well.
Consider this…
You’ve built a landing page for your newsletter’s holiday campaign. The newsletter itself is drawing readers nicely to your site, but the landing page is producing underwhelming results.
The landing page is designed to direct users to purchase a discounted product through a call-to-action widget, but the conversion rate for this is extremely poor.
Based on analytics data, you notice that only a handful of users scroll to the call-to-action, and most just load the landing page and then follow links to your main site (where they don’t seem to convert either).
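Spotting a drop-off like this is essentially a funnel calculation over exported analytics events. Here is a minimal sketch in Python, where the event names (`page_view`, `cta_view`, `cta_click`) and the per-session event sets are hypothetical stand-ins for whatever your analytics tool actually records:

```python
from collections import Counter

# Hypothetical data: each session is represented as the set of events it fired.
sessions = [
    {"page_view"},
    {"page_view", "cta_view"},
    {"page_view"},
    {"page_view", "cta_view", "cta_click"},
    {"page_view"},
]

def funnel(sessions, steps):
    """Count how many sessions reached each step of the funnel."""
    counts = Counter()
    for events in sessions:
        for step in steps:
            if step in events:
                counts[step] += 1
    return {step: counts[step] for step in steps}

for step, n in funnel(sessions, ["page_view", "cta_view", "cta_click"]).items():
    print(f"{step}: {n}/{len(sessions)} sessions")
```

With the sample data above, every session loads the page, but only two ever see the call-to-action and only one clicks it, which is exactly the pattern described: visitors arrive but never reach the widget.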
Your hypotheses are that the call-to-action should be more prominently displayed and that the value statement should be clearer.
You design an A/B test with a variation of the call-to-action (in addition to the original), with the purpose of improving the conversion rate of your discount product campaign.
Once the experiment is over, you should have a good idea of whether the new variant produces a higher probability of conversion, whether the original version performs better, or whether the results were inconclusive.
The result of this experiment can then be used to update the landing page appropriately, or to generate an updated hypothesis for a follow-up test.
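Deciding between "the variant won," "the original won," and "inconclusive" is typically done with a statistical significance test. One common choice is a two-proportion z-test; here is a minimal sketch using only the Python standard library, with made-up visitor and conversion counts rather than real campaign data:

```python
import math

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for a difference between two conversion rates."""
    p_a = conv_a / n_a
    p_b = conv_b / n_b
    # Pooled conversion rate under the null hypothesis of no difference.
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    # Standard error of the difference between the two proportions.
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    return p_a, p_b, z

# Hypothetical counts: 1,000 visitors per variant.
p_a, p_b, z = two_proportion_z_test(conv_a=18, n_a=1000, conv_b=41, n_b=1000)
print(f"A: {p_a:.1%}  B: {p_b:.1%}  z = {z:.2f}")
```

If |z| exceeds roughly 1.96, the difference is significant at the 95% confidence level; with these sample counts z comes out around 3.0, so the variant would win. If |z| stays below the threshold, the experiment is inconclusive and you would return to the hypothesis stage. In practice, A/B testing platforms run this kind of calculation (or a Bayesian equivalent) for you.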