Decision to make (Channel-mix in customer acquisition)

An organization wanted to ensure that its digital channels were working properly and that the investment in each channel was producing the expected outcome. The marketing budget was split across four acquisition channels, Paid/Referral/Organic/Upgrade, with a customer-acquisition split of 0.72/0.07/0.12/0.09 respectively, and they wanted to rely less on Paid and move more into the other channels.
So the team started to pivot resources from the Paid channel to the others: new sales and marketing tactics were deployed to make that pivot effective.
With that as a backdrop, the organization wanted to verify that the pivot was producing the expected change in the new-customer acquisition proportions.

The thing is, the team suggested there was an improvement in one key metric, customers acquired through the Paid channel, which went from 72% to 69%. And the data seemed to confirm it:
Acquisition by channel   Paid   Referral   Organic   Upgrade
Customers                 180         26        25        29
Percentage                 69         10        10        11

Sounds familiar? Well done, congratulations to the team, let's move on... And the next month we face the same challenge again! But nobody wants to raise that hot topic twice.

Mechanism to make the decision

In this situation, I partnered with the head of marketing and together we challenged the conventional wisdom: I ran a statistical test to check whether we had enough evidence for that improvement.



There is a specific step-by-step methodology for this kind of challenge, which can be summarised in the following picture. The model has a large statistical component in the design phase, as well as in the way the experiment is run and analyzed. For me, the key thing here is to ensure we collect enough data to infer a population parameter from a sample. Technically this is called power analysis, and it tells you the sample size you need.
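As a sketch of what that power analysis could look like (a minimal example, assuming we want to detect a shift from the baseline split 0.72/0.07/0.12/0.09 to roughly 0.69/0.10/0.10/0.11, with the usual 5% significance level and 80% power; the numbers are illustrative, not the ones from the actual engagement):

```python
# Power analysis sketch for a chi-square goodness-of-fit test,
# using Cohen's effect size w and statsmodels' GofChisquarePower.
import numpy as np
from statsmodels.stats.power import GofChisquarePower

p0 = np.array([0.72, 0.07, 0.12, 0.09])   # baseline proportions
p1 = np.array([0.69, 0.10, 0.10, 0.11])   # shift we want to be able to detect

w = np.sqrt(np.sum((p1 - p0) ** 2 / p0))  # Cohen's effect size w

# Sample size needed for alpha = 0.05 and 80% power with 4 categories
n = GofChisquarePower().solve_power(effect_size=w, n_bins=4,
                                    alpha=0.05, power=0.80)
print(round(n))  # number of new customers to observe before testing
```

A shift of a few percentage points is a small effect, so the required sample runs into the hundreds of customers; collecting fewer would make the later test inconclusive by design.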

During one week we designed the process: we identified what data we needed to test our assumptions, analyzed historical data (there was some seasonality), and amended or discarded wrong, missing and immature data, eventually ending up with a proper baseline.

Then we started the experiment, collecting data while interfering as little as possible with the process, and eventually ended up with a very simple dataset: how many new customers were acquired through each sales channel over a limited period of time.

I ran what is known as a statistical test, a "multiple sample proportion test", using a \(\chi^2\) goodness-of-fit test, which compares a set of observed counts against the expected proportions and decides whether there is a difference. In R you can do this very easily, as you can see below:

observ <- c(180, 26, 25, 29)  # observed customers: Paid/Referral/Organic/Upgrade
(res <- chisq.test(observ, p = c(0.72, 0.07, 0.12, 0.09)))
## 
##  Chi-squared test for given probabilities
## 
## data:  observ
## X-squared = 6, df = 3, p-value = 0.1
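The same check can be reproduced outside R. A minimal Python sketch with SciPy, assuming the same observed counts:

```python
# Chi-square goodness-of-fit test: observed counts vs. baseline proportions.
from scipy.stats import chisquare

observed = [180, 26, 25, 29]  # customers per channel: Paid/Referral/Organic/Upgrade
n = sum(observed)
expected = [n * p for p in (0.72, 0.07, 0.12, 0.09)]  # counts under the old split

stat, pvalue = chisquare(observed, f_exp=expected)
print(stat, pvalue)  # roughly 6.19 and 0.10, matching the R output above
```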

Because the p-value is 0.10, above the usual 0.05 threshold, the result is not statistically significant. We conclude that there is no evidence of a difference in the proportion of customer acquisition amongst the channels compared to the original proportions of 0.72/0.07/0.12/0.09. In other words, there is no evidence against the null hypothesis that the proportions of acquired customers across the different channels stayed the same.

We concluded that, at the 95% confidence level, the observed variation is attributable to pure statistical noise.

Action

Let’s recap briefly what we have achieved:

We saw how simple maths was leading us to the wrong conclusion: the initial measurements suggested a change in the proportion of customers acquired through the different channels (from 72% to 69% for Paid). However, we showed that this was just statistical variation, not significant, and therefore not attributable to the change in sales-channel tactics.

It's a well-known cognitive bias that we tend to seek, in what we see and hear, confirmation of what we already believe. To avoid that bias, challenge your own assumptions and follow a more scientific thinking process.

The suggestion to keep measuring changes in sales tactics this way was very well received by the head of Marketing, who had always felt that this specific KPI was quite variable over time; having a reliable litmus test to double-check that changes were moving the needle in the right direction was welcomed by all.

Depending on what you need to test (proportion of acquired customers, mean usage after some changes to your product, ...), there are several statistical techniques in addition to the \(\chi^2\) test, such as ANOVA or the t-test. They can even be implemented in Excel.
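For instance, comparing mean product usage before and after a change maps to a two-sample t-test. A Python sketch with SciPy, using purely hypothetical, randomly generated data:

```python
# Two-sample t-test: did mean usage change after a product tweak?
# The data below is invented for illustration only.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(42)
before = rng.normal(10.0, 2.0, size=50)  # daily usage before the change
after = rng.normal(10.5, 2.0, size=50)   # daily usage after the change

stat, pvalue = ttest_ind(before, after)
print(pvalue)  # small p-value -> evidence of a real shift in mean usage
```

The logic is the same as with the channel proportions: decide up front what effect you care about, collect enough data, and only then read the p-value.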

Sounds really simple, right? And powerful, isn't it?
Do you want to test the effectiveness of different courses of action? Please drop me an email.