I have a research pet peeve. A few close friends have heard me rant about this topic, but for the most part, I’ve kept my feelings to myself… until now. This blog seems like a perfect platform for me to air my views. Why? Because I am hoping to get some good input that might change my perspective, or at least broaden my view.
So what is this secret pet peeve? Industry benchmarks and people who misuse them.
Here are some things that bug me about industry benchmarks:
- Companies don’t or can’t measure. The companies I work with always want to benchmark their performance. For example, they want to know benchmarks such as the % of revenue influenced by marketing. However, when I run a survey to collect the peer data to establish the benchmark, I find that few marketing organizations are able to reliably measure marketing’s influence on revenue. I have the same problem when I ask questions about pipeline conversion rates. So herein lies the dilemma: If most companies don’t yet measure and can’t provide the data, where are the benchmarks coming from? How many companies are actually contributing to the “industry” benchmark?
- It depends. How meaningful is an industry benchmark on email open and click-through rates? (A very common question I get.) According to Constant Contact, the industry average email open rate for retail is 17%. My husband has a retail business. He uses Constant Contact. Yet his email open rate is 47%. Based on these stats, my husband appears to have superpowers! He doesn’t. What he does have, though, is an in-house, opt-in list of very loyal customers. The benchmark for a purchased email list would be very different. When benchmarking, there are so many factors to consider.
- Ambiguity. This is what I refer to as a “non-benchmark benchmark.” Here’s an example: “By the time a buyer talks to a salesperson, that buyer is 70% through the buying process.” Sounds good. Conveys undeniably important information and is probably even true. But how was this actually measured? 70% of what? Time? Number of buying stages? Did this benchmark come from surveying buyers or vendors? Is this really measurable? My guess is that the originator of this benchmark (and I wish I had come up with this often-quoted snippet!) had sound methodology, but through viral use, the benchmark has been bastardized.
“Wait!” you say. “Julie, you have been publishing industry benchmarks as part of ITSMA for 15+ years!”
That’s true. But when I publish a benchmark, I try to be completely transparent. You will know:
- How many companies/respondents contributed to the benchmark
- Who those companies/respondents were
- The exact question they answered
I strive to get a homogeneous group of carefully screened organizations contributing to the benchmark, and I further cut the data by type and size of organization, or other relevant characteristics, to make it even more specific. The data is not perfect, and I never claim it to be. My services provider benchmark data consists of averages of ITSMA members’ best educated estimates.
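For the analytically inclined, that “cut the data” step is simple to sketch. Here is a minimal Python illustration using made-up numbers (none of this is real ITSMA data), showing why a segmented average with a disclosed respondent count is more honest than a single blended “industry” figure:

```python
from collections import defaultdict
from statistics import mean

# Hypothetical survey responses: (segment, reported email open rate).
# Segments and rates are purely illustrative.
responses = [
    ("retail / in-house opt-in list", 0.47),
    ("retail / in-house opt-in list", 0.39),
    ("retail / purchased list", 0.11),
    ("retail / purchased list", 0.14),
    ("retail / purchased list", 0.09),
]

# Cut the data by segment, and report both the average and the
# number of respondents behind it, so readers can judge the sample.
by_segment = defaultdict(list)
for segment, rate in responses:
    by_segment[segment].append(rate)

for segment, rates in sorted(by_segment.items()):
    print(f"{segment}: n={len(rates)}, avg={mean(rates):.0%}")
```

Notice that the blended average across all five responses would tell you almost nothing useful, while the per-segment figures (with their small sample sizes in plain view) at least let you judge whether a comparison is apples to apples.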
In fact, I have a little speech I use when it comes to benchmarking B2B services marketing budgets and performance metrics:
- Benchmarking is both an art and a science
- Common definitions are always evolving (No two marketing organizations that I have ever worked with define the scope of marketing and their marketing budgets the same exact way.)
- Metrics can and will be imprecise
Out of the gate, benchmarking is like comparing apples and oranges. With experience, it can become more like comparing apples to apples. But don’t fool yourself—you’ll still be comparing Red Delicious apples to Granny Smiths, Galas, Honeycrisps, Yellow Delicious, Fujis, and McIntoshes.
What do you think?