Sunday, August 29, 2021

Revenue Growth Is "Job 1" for B2B Marketers


Over two decades ago, Sergio Zyman, the one-time chief marketing officer of Coca-Cola, wrote, "The sole purpose of marketing is to get more people to buy more of your product, more often, for more money . . . Yes, you need to advertise and create images that you hope customers will like and remember in the store or at the register, but the only reason to spend money on them is if they help you sell more stuff." (The End of Marketing As We Know It, 1999) 

In the B2C world, marketing has always played the starring role in driving revenue growth. In contrast, marketing historically was not seen as the primary driver of revenue growth in most B2B companies. Sales typically owned the revenue line of the income statement, and marketing's primary role was to support the sales force.

The role of marketing started to change in the early 2000's, largely because of three factors.

First, the volume of online information began growing exponentially. The abundance of easily accessible online information enabled business buyers to research potential purchases on their own, which greatly reduced their dependence on vendor sales reps. In essence, business buyers started to learn about products and services through content resources rather than through interactions with a salesperson.

Second, B2B marketing automation solutions made their appearance in the early 2000's. These technologies enabled B2B marketers to automate lead generation and lead nurturing programs and to implement sophisticated lead scoring systems. The adoption of B2B marketing automation applications meant that marketers could take responsibility for a larger portion of the buying process.

Third, and more recently, ecommerce has made significant inroads in B2B buying, and it's growing rapidly. By one estimate, B2B ecommerce sales will be $7.72 trillion in 2021, and they are forecast to reach over $25 trillion in 2028. In fact, worldwide B2B ecommerce revenues are now six times larger than B2C ecommerce revenues.

Revenue Growth Becomes a Primary Marketing Mandate

By the second decade of the 2000's, driving revenue growth had become one of the primary mandates of the marketing function, as evidenced by two research surveys conducted in 2016.

Today, the pressures on marketers to deliver consistent revenue growth have become even more intense. A 2021 global survey of marketing leaders by the CMO Council found that marketers are now responsible for 44% of revenue on average. That's up from just over 10% of revenue in the early 2000's. This study was apparently not limited to B2B marketers, so it's not possible to determine from this research what percentage of revenue B2B marketers are responsible for.

The CMO Council study also found that nine out of ten marketers are expected to grow revenue this year, and 63% of the survey respondents said they and their marketing team are under very high or extreme pressure to deliver on revenue targets.

Are Marketers Meeting the Revenue Growth Challenge?

So how well are marketers meeting the revenue growth challenge? On this point, recent data paints a mixed picture. For example, in the CMO Council study just discussed, over half (53%) of the respondents said their CEO is only moderately satisfied with marketing's performance.

Other research by the CMO Council confirms the mixed picture. In a 2021 survey of senior management executives, 17% of the respondents said they were extremely confident that their marketing function can lead a growth recovery in 2021. However, 52% of the survey respondents said they were only moderately confident in marketing's ability to lead growth this year, and another 29% reported being only somewhat confident.

Findings from The CMO Survey indicate that marketers have not yet taken a leadership role in many business activities that have a major impact on revenue growth. In the February 2020 edition of the survey, senior marketing leaders at U.S. companies were asked what activities marketing is primarily responsible for in their organization. The four activities most frequently identified by survey respondents were:
  1. Brand (90.0% of respondents)
  2. Digital marketing (86.0%)
  3. Advertising (86.0%)
  4. Social media (80.7%)
On the other hand, only 32.7% of the respondents indicated that marketing is primarily responsible for revenue growth, and even fewer respondents said that marketing has primary responsibility for market entry strategies (31.3%), new products (22.7%), and market selection (18.0%).

There is no single "silver bullet" solution that will enable marketers to immediately fulfill their revenue growth responsibilities. But they can take steps to improve their ability to impact revenue growth. I'll discuss one of those steps in my next post.

Image courtesy of Kari Bluff via Flickr (CC).

Sunday, August 22, 2021

Marketing Week Article Takes Aim At Account-Based Marketing


Marketing Week published an article earlier this month that is sure to provoke a strong response from proponents of account-based marketing. In "Account-based madness:  The new craze in B2B," authors Jon Lombardo and Peter Weinberg fire a broadside at ABM, calling it an "unholy monstrosity."

The authors reluctantly acknowledge that ABM is "a pretty decent idea" if it's done correctly. But they also contend that ". . . almost no one in B2B is doing ABM right."

Lombardo and Weinberg define ABM as ". . . a strategy in which the marketing department delivers personalised communications to best-fit accounts, which are prioritised based on data from the sales team." The authors note that this "seems to be" the most common definition of ABM, and they refer to it as "bad ABM."

Lombardo and Weinberg write that, ". . . bad ABM is actually three bad ideas - personalisation, hypertargeting, and loyalty marketing - mashed into one unholy monstrosity."

Here's how the authors describe the three "bad ideas" of "bad ABM."

Personalization - According to Lombardo and Weinberg, bad ABM assumes that every account has unique needs and that content personalized for each account will drive better marketing performance. The authors contend that, ". . . personalised creative does not outperform generalised creative, despite many unsubstantiated claims to the contrary." And they argue that added cost and complexity will cancel out any benefits of personalization.

Hypertargeting - Lombardo and Weinberg say that bad ABM also assumes that targeting the right customers is more profitable than targeting all potential customers. But they argue that, ". . . the best available evidence suggests that B2B brands grow by reaching every buyer in the category."

Loyalty Marketing - The third "bad idea" is that bad ABM assumes that marketing will produce more growth by targeting a few large accounts rather than a larger group of accounts of all sizes. The authors contend that this assumption is dead wrong.

Lombardo and Weinberg offer three suggestions for transforming "bad ABM" into "good ABM."

Target the Category - Good ABM targets all the potential buyers in the relevant category, not just a narrow subset of buyers.

Avoid Over-Personalization - Good ABM features messages and stories that cover the most common buying situations applicable to all potential category buyers.

Avoid Hypertargeting - Good ABM seeks to reach both large and small buyers.

The authors summarize their position in unequivocal terms:  ". . . broadly targeting a massive set of customers with the same message isn't a bad marketing strategy. It's the most effective marketing strategy. It's how almost every brand in human history has been built. . . It's an old strategy, yes, but it's old for a reason - it works."

What's Wrong With This Picture?

It would be easy to dismiss this article as expressing views on account-based marketing that are held by only a very small minority of B2B marketers. I disagree with most of the points made in the article, but I also think it's worthwhile to place the authors' views in context.

Jon Lombardo and Peter Weinberg are both "Global Leads" at The B2B Institute, a think tank funded by LinkedIn. For the past several years, The B2B Institute has been a strong proponent of brand marketing by B2B companies, and it has published several content resources by brand marketing advocates such as Les Binet and Peter Field (e.g. The 5 Principles Of Growth In B2B Marketing). 

The B2B Institute has also published several papers written by researchers at the Ehrenberg-Bass Institute for Marketing Science. (Note:  Byron Sharp, the author of How Brands Grow, is probably the most widely-known marketing thought leader working at Ehrenberg-Bass.) The Ehrenberg-Bass approach to marketing emphasizes the importance of brand building and more specifically, the importance of concepts such as mental availability, distinctiveness, and brand salience.

I find much of the content published by The B2B Institute to be persuasive and compelling, and I agree that most B2B companies are probably under-investing in long-term, broad-reach brand marketing and over-investing in short-term, highly-targeted demand generation marketing.

I suspect that Lombardo and Weinberg were motivated by this belief in writing the article. It's also not surprising that the marketing principles discussed in the Marketing Week article line up closely with the perspectives of Binet, Field and Ehrenberg-Bass. But the attack on "bad ABM" is ultimately misguided, and what the authors call "good ABM" really isn't ABM at all.

When ABM is used under the right circumstances and in the right ways, it can be a vital part of a B2B company's marketing efforts. The effectiveness of account-based marketing has been clearly demonstrated. B2B marketers just need to remember that ABM isn't the only type of marketing they need to be using. That's the point Lombardo and Weinberg should have emphasized.

Image courtesy of emiliokuffer via Flickr (CC).

Sunday, August 15, 2021

An Updated Look At the State of Marketing Budgets

Image Source:  Gartner, Inc.

Last month, Gartner released the findings of its CMO Spend Survey, 2021. The firm hosted a webinar and published a report discussing the findings of this research.

The Gartner survey was conducted from March through May of this year. The survey produced 400 responses from marketing decision makers and influencers located in North America, Europe and the UK. Respondents were with organizations operating in nine industry verticals, and 81% were with organizations having $1 billion or more in annual revenue. Therefore, this survey primarily captured the perspectives and experiences of marketers in large enterprises.

Gartner expressly noted that the results of this survey do not represent the market as a whole, but only reflect the sentiments of the respondents surveyed.

Marketing Budgets Fall

The "headline" finding of the Gartner research points to a significant decline in marketing budgets as a proportion of company revenue. Gartner found that the mean percentage of total company revenue allocated to marketing in 2021 is 6.4%, down from an average of 11% in 2020.

The mean percentage of revenue allocated to marketing in 2021 is about the same in both B2B and B2C companies. The mean percentage for B2B companies represented in the survey was 6.2%, while the mean percentage for B2C companies was 6.6%.

Gartner's research also revealed that marketing budgets (as a percentage of revenue) declined in all nine of the industries represented in the survey, although the impact varied considerably. In manufacturing companies, the mean percentage of total company revenue devoted to marketing fell from 12.7% in 2020 to 5.8% in 2021, a decline of 6.9 percentage points. In contrast, the mean percentage in consumer products companies declined by only 2.5 percentage points.
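
A quick aside for readers who like to check the math: the short Python sketch below, with the Gartner figures hard-coded purely for illustration, shows the difference between a percentage-point decline and a relative decline. A drop from 12.7% to 5.8% is 6.9 percentage points, but it amounts to cutting the manufacturing marketing budget (as a share of revenue) by more than half.

```python
# Illustrative only: marketing budget as a % of revenue, per the figures cited above.
budgets = {
    "All companies": (11.0, 6.4),   # (2020 mean %, 2021 mean %)
    "Manufacturing": (12.7, 5.8),
}

for segment, (pct_2020, pct_2021) in budgets.items():
    point_drop = pct_2020 - pct_2021          # decline in percentage points
    relative_drop = point_drop / pct_2020     # decline relative to the 2020 level
    print(f"{segment}: down {point_drop:.1f} points, a {relative_drop:.0%} relative decline")
```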

Digital Channels Dominate

The Gartner survey found that pure-play digital marketing channels now command more than 70% of the total marketing budget in the average company represented in the survey. However, the survey also revealed that companies' investment plans for both digital and offline channels vary considerably.

The following table shows the percentages of B2B survey respondents who said they are increasing and decreasing their investments (2021 vs. 2020) in the marketing channels covered in the survey. These responses indicate that B2B survey respondents are taking diverse approaches to budget allocation decisions.

It's also noteworthy that cost savings was not a primary driver of the changes in channel priorities. Only 24% of the survey respondents said they had reprioritized channels in order to reduce costs.

In the webinar materials, Gartner argued that "the age of the digital versus offline budget has come to an end." The firm noted that 24% of the budget traditionally spent on TV is expected to shift from broadcast and cable TV to streaming video this year.

Other Important Findings

Gartner's survey produced several other interesting findings. Here are a few of the other major results.

Martech Spending - Marketing technology still commands the largest proportion of the marketing budget. In 2021, the mean percentage of the total marketing budget allocated to martech was 26.6%, up slightly from 2020 (26.2%). In addition, more than two-thirds (68%) of the survey respondents said they expect their martech budget to increase further in the next fiscal year.

Budget Allocations for Programs and Operational Areas - Gartner also asked survey participants how they allocated their budget across ten marketing programs and operational areas. The top four programs/operational areas (by mean percentage of budget) were:

  • Digital commerce (12.3%)
  • Marketing operations (11.9%)
  • Brand strategy (11.3%)
  • Marketing analytics (11.0%)
In-Housing Continues - In the 2021 survey, the mean percentage of total marketing budget allocated to external agencies was 23.0%, down slightly from 23.7% in the 2020 edition of the survey. But this slight decline doesn't tell the whole story. The respondents to the 2021 survey reported that 26% of the work previously outsourced to external agencies had been moved in-house over the preceding 12 months.

My Take

Gartner's latest CMO Spend Survey provides several important insights for marketers, but I'm a little perplexed by the survey's finding about the "decline" in marketing budgets. Real-time data regarding total marketing spending is difficult to obtain, but we do have relatively current data about advertising spending. And most of that data indicates that advertising spending is showing a significant rebound compared to 2020. For example:
  • According to Standard Media Index's U.S. Ad Market Tracker, the U.S. advertising economy grew 35.2% in June, compared to June of 2020. This was the fourth consecutive month of double-digit year-over-year growth. The June figure also represented a slight increase (0.03%) compared to June of 2019.
  • Advertising revenues for the first six months of 2021 at both Alphabet (the parent company of Google) and Facebook grew by about 50% compared to the first six months of last year, according to the companies' earnings reports.

The next edition of The CMO Survey directed by Dr. Christine Moorman at Duke University's Fuqua School of Business should be published later this month. That survey normally asks participants about their marketing budgets as a percentage of company revenue and about changes in the marketing budget over the prior 12 months. It will be interesting to see whether the findings of that research line up with the findings of Gartner's survey.

Sunday, August 8, 2021

Surprise! Survey Respondents Don't Always Tell the Truth


 "People don't think how they feel, they don't say what they think, and they don't do what they say."

David Ogilvy, Founder of Ogilvy & Mather

Research surveys have become a prominent feature of the B2B marketing landscape in recent years. The growing need to produce effective thought leadership content and easy access to free or inexpensive survey technologies have led many B2B marketers to make surveys an integral part of their marketing efforts.

Today's survey technology tools make it relatively easy to create survey instruments and conduct online surveys. But these tools don't eliminate the challenges involved in designing surveys that produce accurate results. In reality, survey research is a complex topic, and expertise is required to do it well. The validity of survey data can be affected by numerous factors, many of which are not intuitively obvious.

This is my third post dealing with a range of issues relating to research surveys and survey reports. The first two posts in the series are available here and here.

In this post, I'll focus on a group of issues that can affect the accuracy of survey responses. These issues are by no means all of the factors that can impair the validity of survey data, but they are important for marketers to understand both as producers and consumers of survey-based content.

Response Biases

Experienced researchers have long been aware that survey respondents don't always answer survey questions accurately or truthfully. Response bias is the general term used to describe several tendencies of survey participants to respond inaccurately to survey questions. These biases can be conscious or subconscious, and they can have a significant impact on the validity of survey data.

Response biases can occur for a variety of reasons, and social scientists have defined several forms of response bias. Here are some of the more common forms.

Acquiescence bias (sometimes called agreement bias or yea-saying) - Acquiescence bias refers to the tendency of some survey participants to select a "positive" response option (e.g. agree rather than disagree) regardless of their actual preference or opinion. The bias is more likely to exist when a survey question asks for an opinion and presents two opposite choices, such as agree/disagree or true/false.

Demand characteristics - This term refers to a type of response bias where survey participants alter their responses simply because they are taking a survey. Social scientists argue this bias arises because some survey participants will try to determine what the purpose of a survey is and then subconsciously change their responses to fit the perceived purpose of the survey.

Question order bias - This term refers to the tendency of some survey participants to respond differently to questions based on the order in which the questions are asked. For example, research has found that if survey participants are first asked about their general interest in a given subject, their responses will indicate a greater interest than if they are first asked a specific or technical question about the same subject.

Extreme response bias - Extreme response bias refers to the tendency of some survey participants to only choose the most extreme response options available for survey questions. For example, if a survey provides a series of statements and asks participants to rate their agreement or disagreement with each statement using a 5-point scale, some participants will only give ones or fives as answers.
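
If you want a feel for how much damage these response patterns can do, here is a minimal simulation written in Python with NumPy. All of the parameters - such as a 15% acquiescence rate and a 10% extreme-responder rate - are assumptions I've invented for illustration, not figures from any actual survey.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 1000

# Hypothetical "true" attitudes on a 1-5 agreement scale, uniformly spread.
true_scores = rng.integers(1, 6, size=n).astype(float)
observed = true_scores.copy()

# Acquiescence bias: assume ~15% of respondents drift one point toward "agree".
acquiescers = rng.random(n) < 0.15
observed[acquiescers] = np.minimum(observed[acquiescers] + 1, 5)

# Extreme response bias: assume ~10% of respondents only use the endpoints.
extremists = rng.random(n) < 0.10
observed[extremists] = np.where(observed[extremists] >= 3, 5, 1)

print(f"True mean:     {true_scores.mean():.2f}")
print(f"Observed mean: {observed.mean():.2f}")
print(f"Share who 'agree' (4 or 5): {(true_scores >= 4).mean():.0%} true "
      f"vs {(observed >= 4).mean():.0%} observed")
```

Even with these modest made-up bias rates, the observed share of "agreement" shifts by several percentage points, which can be enough to change the headline finding of a survey report.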

There are several steps survey designers can take to mitigate the impact of response biases, but none of these techniques is likely to be 100% effective.

The Social Desirability Bias

One type of response bias has become especially relevant for marketers in today's business environment. The social desirability bias refers to the tendency of survey participants to answer survey questions in a manner they believe will be viewed favorably by others versus what they actually think, feel or do.

This bias is more likely to exist when survey questions relate to sensitive personal topics - such as alcohol consumption or drug use - or "hot button" social topics - such as climate change or racial diversity. The social desirability bias can lead to the over-reporting of "good" or socially acceptable attitudes and behaviors and the under-reporting of attitudes and behaviors survey participants perceive will be viewed as "bad" or socially unacceptable.

It's important for marketers to be aware of the social desirability bias because a growing number of pundits are now contending that corporate social responsibility has become an important aspect of customer buying decisions, and that therefore companies should include "social purpose" messaging in their marketing programs.

At first glance, this argument appears to be supported by a significant body of research. Numerous surveys conducted over the past several years have found that many people - particularly younger people - have become more socially conscious, and that they increasingly expect business organizations to take a more active role in addressing social problems. Here are a few examples of this research:

  • In Edelman's Trust Barometer 2021 survey of consumers in 27 countries, 86% of the respondents said they expect the brands they buy to take one or more of several actions. These actions included addressing social challenges, creating positive change in society, addressing political issues, and making our culture more accepting.
  • In Accenture Strategy's 2019 Global Consumer Pulse survey of consumers in 36 countries, 74% of younger consumers (basically millennials and Generation Z) said they want companies to take a stand on issues "close to their hearts," and more than 50% said they have shifted a portion of their spending away from a company whose words or actions on a social issue disappointed them.
  • In a 2017 survey of consumers in 16 countries by BBMG and GlobeScan, nearly two-thirds (65%) of the respondents said they want to support companies with a strong purpose.

Social purpose marketing is a controversial subject, and my intent here is not to take either side of the debate. But we know the social desirability bias exists, and therefore some surveys may overstate the importance or value of social purpose marketing. We also know there is often a discrepancy between what survey participants say they believe or will do and their actual behaviors. That phenomenon is called the "value-action gap," but that's a topic for another blog post.

The real takeaway here is that marketers should be cautious about relying on survey research that is susceptible to the social desirability bias.

Image courtesy of Chris Short via Flickr (CC).

Sunday, August 1, 2021

Why Marketers Should Be On the Lookout For Unjustified Survey-Based Conclusions


This is the second of my series of three posts discussing several issues that can affect the validity of survey findings and/or the credibility of survey reports. As I wrote in my last post, surveys and survey reports have become important marketing tools for many kinds of B2B companies, including those that offer various kinds of marketing-related technologies and services. So, many B2B marketers are now both producers and consumers of survey-based content.

Whether they are acting as producers or consumers, marketers need to be able to evaluate the quality of research-based content resources. Therefore, they need a basic understanding of the issues that can impact the validity of survey results and make survey reports more or less credible and authoritative.

This post will discuss an issue that can easily undermine the credibility of a survey report. My next post will focus on some of the issues that can affect the validity of survey findings.

The "Sin" of Unjustified Conclusions

An essential requirement for any credible survey report is that it must only contain conclusions that are supported by the survey data. This sounds like basic common sense - and it is - but unfortunately too many survey reports include express or implied conclusions that the survey findings don't actually support. In my experience, most unjustified conclusions arise from blurring the lines between correlation and causation.

One of the fundamental principles of data analysis is that correlation does not establish causation. In other words, survey findings may show that two events or conditions are statistically correlated, but this alone doesn't prove that one of the events or conditions caused the other. Many survey reports emphasize the correlations that exist in survey data, but few reports remind the reader that correlation doesn't necessarily mean causation.

The following chart provides an amusing example that illustrates why this principle matters. The chart shows that from 2000 through 2009 there was a strong correlation (r = 0.992568) between the divorce rate in Maine and the per capita consumption of margarine in the United States. (Note:  To see this and other examples of nonsensical correlations, take a look at the Spurious Correlations website.)
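
For anyone who wants to see how easily an impressive-looking correlation coefficient can be produced, here is a small Python sketch. The numbers are invented stand-ins, not necessarily the actual figures behind the chart, but any two series that happen to drift in the same direction over time will behave much the same way.

```python
import numpy as np

# Two invented series that both happen to drift downward over the same decade.
divorce_rate  = np.array([5.0, 4.7, 4.6, 4.4, 4.3, 4.1, 4.2, 4.1, 4.2, 4.1])
margarine_lbs = np.array([8.2, 7.0, 6.5, 5.3, 5.2, 4.0, 4.6, 4.5, 4.2, 3.7])

r = np.corrcoef(divorce_rate, margarine_lbs)[0, 1]
print(f"Pearson r = {r:.3f}")  # a very high r, with no causal link whatsoever
```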

I doubt any of us would argue that there's a causal relationship between the divorce rate in Maine and the consumption of margarine despite the strong correlation. These two "variables" just don't have a common-sense relationship.

But when there is a plausible, common-sense relationship between two events or conditions that are also highly correlated statistically, we humans have a strong tendency to infer that one of the events or conditions caused the other. This tendency can lead us to see a cause-and-effect relationship in cases where none actually exists.

Here's an example of how this issue can arise in the real world. A well-known research firm conducted a survey that was primarily focused on capturing data about how the use of predictive analytics impacts demand generation performance. The survey asked participants to rate the effectiveness of their demand generation process, and the survey report includes findings (contained in the following table) showing the correlation between the use of predictive analytics and demand generation performance.

Based on these survey responses, the survey report states:  "Overall, less than one-third of the study participants report having a B2B demand generation process that meets objectives well. However, when predictive analytics are applied, process performance soars, effectively meeting the objectives set for it over half of the time." (Emphasis in original)

The survey report doesn't explicitly state that predictive analytics was the cause of the improved performance, but it comes very close. The problem is, the survey data doesn't really support this conclusion.

The data does show there was a correlation between the use of predictive analytics and the effectiveness of the respondents' demand generation process. But the survey report - and likely the survey itself - didn't address other factors that may have affected demand generation performance.

For example, the report doesn't indicate whether survey participants were asked about the size of their demand generation budget, or the number of demand generation programs they run in a typical year, or the use of personalization in their demand generation programs. If these questions had been asked, we might well find that all of these factors were also correlated with demand generation performance.
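
Here is a toy simulation, again in Python, that illustrates the point. Every number in it is an assumption rather than survey data: budget size drives both the adoption of a hypothetical predictive analytics tool and demand generation performance, while the tool itself contributes nothing. The tool still ends up looking positively correlated with performance, and tool users still post a higher average score.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 2000

# Hypothetical confounder: each team's demand generation budget (arbitrary units).
budget = rng.lognormal(mean=0.0, sigma=0.5, size=n)

# Bigger-budget teams are much more likely to have bought the analytics tool...
adoption_prob = 1 / (1 + np.exp(-3 * (budget - 1.0)))
uses_tool = rng.random(n) < adoption_prob

# ...and bigger budgets also drive performance. The tool itself adds nothing.
performance = 50 + 20 * budget + rng.normal(0, 5, size=n)

print(f"Mean performance, tool users: {performance[uses_tool].mean():.1f}")
print(f"Mean performance, non-users:  {performance[~uses_tool].mean():.1f}")
r = np.corrcoef(uses_tool.astype(float), performance)[0, 1]
print(f"Correlation between tool use and performance: {r:.2f}")
```

In this made-up example, a report could truthfully say that teams using the tool perform better, yet the tool causes none of the difference. That is exactly the trap an uncontrolled correlation invites.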

The bottom line is, when marketers conduct or sponsor surveys, they need to ensure that the survey report contains only those claims or conclusions that are legitimately supported by the survey data. And as consumers of survey research, marketers must always be on the lookout for unjustified conclusions.


Top image courtesy of Paul Mison via Flickr (CC).