Measuring the performance and financial impact of marketing has been (and remains) a major challenge for marketing leaders. In the 2020 Marketing Measurement and Attribution Benchmark Survey by Demand Gen Report, 54% of surveyed marketers said their ability to measure marketing performance and impact needs improvement or is poor/inadequate. The comparable percentage was 58% in the 2019 edition of the survey and 54% in the 2018 survey.
Marketing leaders widely agree about why they need a better process for measuring marketing performance. Seventy-five percent of the respondents in the Demand Gen survey identified the need to show marketing's impact on pipeline and revenue, and 58% cited the need to show ROI from all marketing investments.
Over the past two-plus decades, technological advances have significantly improved our ability to measure some aspects of marketing performance. Today, for example, most forms of digital marketing are highly "trackable." We can see who has opened our emails and who has viewed our content. We can even see how much time people spend with our content.
But measuring the financial impact of marketing remains particularly difficult because of several inherent characteristics of marketing. A recent article on the Harvard Business Review website captures some of the difficulties:
"Marketing's environment is typically much 'noisier' that the factory floor in terms of unknown, unpredictable, and uncontrollable factors confounding precise measurement. Marketing activities can also be subject to systems effects where the portfolio of marketing tactics work together to create an outcome . . . Marketing actions may also work over multiple time frames . . . Finally, it is often difficult to attribute financial outcomes solely to marketing, because businesses frequently take actions across functions that can drive results."
An Important Perspective from Google
Last year, Google published a white paper that addresses the vital topic of measuring marketing performance. The paper is appropriately titled "Three Grand Challenges" because the authors focus on three of the most gnarly challenges relating to the measurement of marketing effectiveness.
The three "grand challenges" described in the Google paper are:
- "Incrementality: proving cause and effect"
- "Measuring the long term, today"
- "Unified methods: a theory of everything"
The authors acknowledge that no perfect solutions for these challenges currently exist. In fact, their main objective is to discuss the areas where current effectiveness measurement methods are "running up against the boundaries of the possible."
Given the importance of this topic, I'll be devoting three posts to the issues described in the Google white paper. This post will cover the first of Google's three "grand challenges."
The Cause and Effect Conundrum
The most fundamental challenge in measuring marketing effectiveness is demonstrating the existence of a valid cause-and-effect relationship between a particular marketing activity and a particular business outcome (i.e. revenue/sales). In marketing, such causal relationships are often impossible to "prove" directly. Instead, we must infer causation, and the challenge is to make sure that our inferences are based on valid evidence.
The Google authors noted that "randomized controlled experiments" are the gold standard for measuring causal effects. These experiments are similar to the clinical trials that are being used to test prospective COVID-19 vaccines. In the vaccine trials, participants are randomized into two groups, one of which receives the vaccine, and one of which receives a placebo. Then the vaccine developer tracks how many people in each group contract COVID-19 to measure the vaccine's effectiveness.
To run a randomized controlled marketing experiment, the first step is to identify a set of test subjects (i.e. potential buyers) who are as similar as possible. The test subjects are then randomly assigned to a test group or a control group. The marketing activity being tested is used with the test group, but not with the control group. The difference between the groups in the desired outcome (i.e. sales) is the estimated effect of the marketing activity.
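The arithmetic behind this estimate is simple, and the sketch below illustrates it with a short Python script. Everything in it is hypothetical: the prospect list, the revenue figures, and the size of the "lift" are simulated stand-ins, not data from any real campaign. The point is only to show the mechanics of random assignment and the difference-in-means calculation.

```python
import random
import statistics

random.seed(42)

# Hypothetical prospect IDs -- in practice these would come from your CRM.
prospects = [f"prospect_{i}" for i in range(10_000)]

# Step 1: randomly assign each prospect to the test group or the control group.
random.shuffle(prospects)
test_group = prospects[: len(prospects) // 2]
control_group = prospects[len(prospects) // 2:]

# Step 2: run the campaign for the test group only, then record the outcome
# (e.g., revenue per prospect). Here we simulate outcomes for illustration;
# the test group gets a small extra "lift" to stand in for a real effect.
def simulated_revenue(received_campaign: bool) -> float:
    base = random.gauss(100, 25)                      # baseline buying behavior
    lift = random.gauss(8, 5) if received_campaign else 0.0
    return max(base + lift, 0.0)

test_revenue = [simulated_revenue(True) for _ in test_group]
control_revenue = [simulated_revenue(False) for _ in control_group]

# Step 3: the difference in average outcomes between the two groups
# is the estimated effect of the marketing activity.
estimated_effect = statistics.mean(test_revenue) - statistics.mean(control_revenue)
print(f"Estimated incremental revenue per prospect: {estimated_effect:.2f}")
```

Because assignment is random, any systematic difference in outcomes can reasonably be attributed to the campaign rather than to pre-existing differences between the groups, which is exactly what makes this design the "gold standard."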
Unfortunately, randomized controlled marketing experiments are not easy to use. For example:
- They must be carefully designed to eliminate extraneous factors that could impact the results.
- They can be expensive and difficult to administer.
- They typically can test only one or two activities at a time.
As a result, such experiments aren't frequently used.
When randomized controlled experiments aren't (or can't be) used, marketers typically rely on historical (a/k/a "observational") data to measure marketing effectiveness. Marketing mix modeling and attribution modeling are two measurement methods that are based on observational data. The results produced by observational methods aren't as reliable as those from randomized experiments, but they are widely used.
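To make the contrast with controlled experiments concrete, here is a minimal sketch of the idea behind a marketing mix model: regress a historical sales series on channel-level spend and read the coefficients as estimates of each channel's contribution. The data below is simulated and the channel names are hypothetical; real models are far more elaborate (adstock, saturation, seasonality), and the coefficients are correlational estimates rather than proof of causation, which is precisely the limitation discussed above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical weekly observational data: spend by channel and total sales.
# In practice these would come from historical records, not a simulation.
weeks = 104
search_spend = rng.uniform(10, 50, weeks)   # $K per week
social_spend = rng.uniform(5, 30, weeks)    # $K per week
email_sends = rng.uniform(20, 80, weeks)    # thousands of emails per week

# Simulated sales with a baseline and noise, standing in for real outcomes.
sales = (200 + 2.5 * search_spend + 1.2 * social_spend
         + 0.4 * email_sends + rng.normal(0, 25, weeks))

# Fit a simple linear marketing mix model: sales ~ baseline + channel activity.
X = np.column_stack([np.ones(weeks), search_spend, social_spend, email_sends])
coef, *_ = np.linalg.lstsq(X, sales, rcond=None)

baseline, b_search, b_social, b_email = coef
print(f"Estimated baseline sales:               {baseline:.1f}")
print(f"Estimated sales per $K of search spend: {b_search:.2f}")
print(f"Estimated sales per $K of social spend: {b_social:.2f}")
print(f"Estimated sales per K emails sent:      {b_email:.2f}")
```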
To establish reasonable expectations for marketing measurement and build credibility in the C-suite, marketing leaders need to have open, frank, and evidence-based conversations with other C-level executives about which aspects of marketing can be measured precisely, and which aspects still require the use of assumptions, correlations, and probabilities.