I spent the afternoon attending the Behavioral Targeting and Testing track, where Jim Novo of Drilling Down delivered an excellent presentation on “Actionable testing and reporting even a manager could love.” Jim’s proposal is to develop your reporting efforts such that you are reporting on people – not campaigns. Many times, there are multiple factors influencing the customer’s behavior on site, and the only way to properly account for those other factors is through the use of a control group.
For example, suppose you are running an email campaign to your existing customer base while the company is also running a PPC campaign and a Super Bowl ad, and a major news event related to the company breaks during the same period. When reporting on the effectiveness of the email ads, how would you account for those other (possibly significant) factors?
In many cases, those other factors are ignored, or multiple promotion efforts ALL try to take credit for the conversions. The solution, according to Jim, is to segment your target audience into two groups: a control group, consisting of perhaps 10% of the audience selected at random, and a test group containing the remaining 90%. The email is sent to the test group, and the control group is NOT mailed. Then BOTH groups are tracked, and the value added by the email campaign is calculated from the difference. (Note that this is basically an A/B test where one of the test cases is to do nothing special in promotion.)
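As a minimal sketch of that random split (the 10% figure comes from the example above; `split_holdout` is a hypothetical helper name, and the seed is just for repeatability):

```python
import random

def split_holdout(customers, control_fraction=0.10, seed=42):
    """Randomly carve off a holdout control group; only the test group is mailed."""
    pool = list(customers)
    random.Random(seed).shuffle(pool)          # deterministic shuffle for repeatable splits
    n_control = int(len(pool) * control_fraction)
    return pool[:n_control], pool[n_control:]  # (control group, test group)

control, test = split_holdout(range(1000))
print(len(control), len(test))  # 100 900
```

The key point is the `random` part: any non-random selection (say, the 10% oldest accounts) would break the assumption that outside influences hit both groups equally.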
In this example, some members of the control group still came back to the site and purchased. Some members of the test group also purchased. The LIFT from the email campaign is the delta between these two values. Since the control and test groups are both subjected to the other influencing factors (some members of both groups see the Super Bowl ad, for example), and the control group is a random subset of the audience, any outside influence should affect both groups equally and can therefore be safely ignored.
An example might help: assuming an initial list of 2.2 million customers, we send the email to 2 million and specifically do not send it to the control group of 200,000. Once the email campaign has run its course, we compare the return from each group. In this example, the control group generated $2.00 per customer, and the test group generated $2.20 per email sent. The true return on the email campaign is not 2M × $2.20 = $4.4M, however, since some percentage of those customers would have returned to the site and purchased even without the email push.
Since the control group indicates that much of the audience would have purchased anyway, the true gain from the email campaign is $2.20 - $2.00 = $0.20 per email (or $400,000). Note that it DOES NOT MATTER what other outside influences occurred during this time, predicted or not. As long as the audience as a whole is exposed to these influences equally (which can be assumed from the random sampling), these influences do not alter the conclusion.
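The arithmetic above reduces to a couple of lines; this sketch simply restates the numbers from the example:

```python
test_size = 2_000_000        # customers mailed (the test group)
revenue_per_test = 2.20      # $ generated per mailed customer
revenue_per_control = 2.00   # $ generated per customer in the unmailed control group

# Lift per email: value added beyond the baseline behavior the control group reveals
lift_per_email = revenue_per_test - revenue_per_control

# Incremental revenue attributable to the campaign alone
incremental_revenue = lift_per_email * test_size

print(f"${lift_per_email:.2f} per email, ${incremental_revenue:,.0f} total")
# $0.20 per email, $400,000 total
```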
This approach has several benefits, because it captures value beyond the tracked results:
- The data is purer – not influenced by other marketing
- Management easily understands the variance reporting
A further advantage is that, when applied to a pool of your “best” customers, this approach can test the effectiveness of non-response campaigns such as:
- birthday cards w/out coupons
- we love you/KISS campaigns
- “just calling to see if you need anything” customer contact
- special events
This approach also makes a critical change to web analytics:
- The source of change is not questionable
- you can take ALL due credit
- variance-style analysis creates confidence
- pure, clean, simple to explain/understand/defend
A further benefit is that this approach is reusable: regardless of the campaign type, size, delivery mechanism, etc., this variance-style reporting can be used to present the results, so it is easier to present to management. A/B or multivariate tests, direct mail campaigns, etc. can be presented to management in the same style.
The biggest concern with this approach is being careful not to “poison” the control group by singling it out for other marketing messages. Organizational and financial stamina is needed to avoid this. Note that it is OK to market to the control group with other campaigns; just make sure the test and control groups are treated as a single block with respect to any other marketing efforts.
Part 2 - Provide Strategic Financial Insight
The second part of Jim’s presentation centered on how to provide strategic insight for management. Each visitor/customer has two value components:
- Value on the books - realized value
- Value from future activity - potential value
The realized value is a backward-looking metric based on an event quantity (volume, frequency, sales, profit) tied to what has already happened (a customer action). The potential value is an event prediction: a forward-looking view of the likelihood of an event happening, derived from patterns of customer behavior that can be used to predict future behavior.
Management cares more about the future than the past, since they can take action on future events. The profit you made today is fine, but if you are only looking backwards, you do not see the whole picture.
Jim concluded with a concrete example of how to implement this strategic insight. Here is the recipe:
Step 1: pick some parameters (usually working in conjunction with Finance) to define a “Best” customer. For example, we will look at the amount spent by the customer, and how recently they have been active on the site. In this case, a good customer will have spent over $100, and will have visited within the last six months.
Step 2: Aligning these two metrics on a 2D graph yields four categories of customers:
- Best (spent more than $100 and visited in the last six months)
- New (visited recently, but haven’t spent enough yet)
- Former Good (spent more than $100, but have not visited recently)
- Dreck (or Bad) (have not spent much, haven’t visited recently)
Step 3: Generate a graph with Amount Spent on one axis and Last Visit on the other. Calculate the percentage of your customers that fall into each of the four categories. Now think about what each of these categories means, and what marketing you might do to each group to move the percentages in a favorable direction. In general, you want the Best and New customer percentages rising and the Former Good and Dreck percentages dropping. For each group, develop a marketing strategy and message. The potential value contributed by each of these groups is where the company growth is located.
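The four-quadrant classification from the steps above can be sketched as a small function. The $100 and six-month thresholds come from the example; the customer records, dates, and the `classify` helper name are all invented for illustration:

```python
from datetime import date, timedelta

SPEND_THRESHOLD = 100                     # "Best" spending cutoff from the example
RECENCY_THRESHOLD = timedelta(days=182)   # roughly six months

def classify(total_spent, last_visit, today):
    """Place a customer into one of the four quadrants."""
    recent = (today - last_visit) <= RECENCY_THRESHOLD
    big_spender = total_spent > SPEND_THRESHOLD
    if big_spender and recent:
        return "Best"
    if recent:
        return "New"          # visited recently, hasn't spent enough yet
    if big_spender:
        return "Former Good"  # spent a lot once, but has gone quiet
    return "Dreck"

# Invented sample data: (total spent in $, date of last visit)
customers = [
    (250, date(2007, 5, 1)),
    (40,  date(2007, 5, 20)),
    (180, date(2006, 1, 3)),
    (15,  date(2005, 11, 9)),
]

today = date(2007, 6, 1)
counts = {}
for spent, visit in customers:
    seg = classify(spent, visit, today)
    counts[seg] = counts.get(seg, 0) + 1

for seg, n in counts.items():
    print(f"{seg}: {100 * n / len(customers):.0f}%")
```

Running this report on the same thresholds period after period is what makes the percentage shifts between quadrants meaningful.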
Why do the C-level managers care? The potential value component is predictive (future-looking), and by growing the Best group you are ensuring strong future performance for the site. This will instill confidence. Note that managers will likely need to see this perform before they believe the predictions, but since you can standardize on this reporting approach, they will gain confidence in the predictions, which in turn will allow them to forecast upturns and downturns rather than report on them after they happen.
For Search Marketing Gurus, Mike Churchill of KeyRelevance Search Engine Marketing