Surveys play an important role in thought leadership marketing. If they answer questions your clients are pondering in a field in which you have expertise and provide services, they create interest, credibility and leads. Also, if the results are novel and believable, journalists will often report on them because they’re easy to turn into stories. Journalists love easy, and their editors love numbers as they indicate that the author isn’t just opining.
But surveys are also easy to do badly. Many survey reports get no traction in the marketplace because they don’t say anything anyone wants to read. Here are five things to do to make sure your survey will be a hit:
1. Make sure you pass the “so what?” test
Here are two key findings from recent surveys on social media that will catch people’s attention: “Companies with the greatest benefits from social media have multiple functions that work together on it” and “The most effective consulting marketers train their consultants in how to use social media.” These findings tell the audience something they probably don’t know and something they can do to improve their performance. On the other hand, here are some key findings that no one is going to care about: “71% of online adults use Facebook”; “Social media is a critical data source”; “Everyone is a mobile consumer.” These findings are irrelevant, obvious, or both. For your survey to grab people’s attention, you have to pick something interesting to investigate at the outset.
2. Nail your hypotheses before you write the questions
A survey should be easy for the respondent to answer. Perhaps because of this, people often think that good surveys are easy to write. They’re not. A good survey asks questions that confirm or refute individual hypotheses (e.g. companies that manage risks well in emerging markets do certain things that others don’t). Those individual hypotheses sit under an overall hypothesis (e.g. emerging markets present a unique and potent combination of risks) that aligns with services the company can provide and addresses a problem its clients have. These questions can’t be written in an afternoon. In fact, you have to work that whole sequence back the other way: client problem > firm’s services > overall hypothesis > individual hypotheses > questions. You have to do it that way, that is, if you want the data you collect to tell a story. And, of course, you do.
3. Make it multidimensional
A single set of questions directed at an undifferentiated population can lead to interesting results. If, however, you can compare those results across industries, geographies, or companies that use CRM versus those that don’t, you can make unexpected discoveries. Questions that let you separate the results to make those comparisons are called banner questions. All surveys should have some.
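A banner question is simply a variable you cut all the other results by. As a minimal sketch (the respondents, segments, and answer labels here are hypothetical), cross-tabulating a result question against a CRM banner question looks like this:

```python
from collections import defaultdict

# Hypothetical responses: (banner answer, result answer).
responses = [
    ("uses_crm", "satisfied"), ("uses_crm", "satisfied"),
    ("uses_crm", "unsatisfied"), ("no_crm", "satisfied"),
    ("no_crm", "unsatisfied"), ("no_crm", "unsatisfied"),
]

# Cross-tabulate: count each result answer within each banner segment.
crosstab = defaultdict(lambda: defaultdict(int))
for banner, answer in responses:
    crosstab[banner][answer] += 1

# Report the share of "satisfied" answers per segment.
for banner, counts in sorted(crosstab.items()):
    total = sum(counts.values())
    share = counts["satisfied"] / total
    print(f"{banner}: {share:.0%} satisfied (n={total})")
```

The comparison across segments, not the overall total, is where the unexpected discoveries live.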
4. Distinguish the leaders from the laggards
Executives don’t care what percentage of companies use an enterprise risk management system, or whether that percentage is growing (though of course, ERM vendors do). They might care, however, what companies that manage risk well do differently from those that don’t, and whether they are more likely to use an ERM. You can answer that question if you build in one or more banner questions to distinguish the leaders from the laggards. For instance, ask how many incidents have caused losses over the past year; then, normalizing for revenue or industry, compare the habits of the quartile with the least losses (your leaders) against the quartile with the most (those poor laggards). As a double-check, you can ask respondents how well they think they manage risk and divide them that way, too.
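That quartile cut can be sketched in a few lines. All the figures and field names below are hypothetical; the point is the mechanics of normalizing losses by revenue, ranking, and comparing the top and bottom quartiles:

```python
# Hypothetical respondents: loss incidents and revenue (say, in $M).
respondents = [
    {"id": 1, "losses": 2.0,  "revenue": 400.0, "uses_erm": True},
    {"id": 2, "losses": 9.0,  "revenue": 300.0, "uses_erm": False},
    {"id": 3, "losses": 1.0,  "revenue": 500.0, "uses_erm": True},
    {"id": 4, "losses": 12.0, "revenue": 200.0, "uses_erm": False},
    {"id": 5, "losses": 3.0,  "revenue": 600.0, "uses_erm": True},
    {"id": 6, "losses": 15.0, "revenue": 250.0, "uses_erm": False},
    {"id": 7, "losses": 2.5,  "revenue": 450.0, "uses_erm": True},
    {"id": 8, "losses": 8.0,  "revenue": 350.0, "uses_erm": False},
]

# Rank by loss rate: lowest quartile = leaders, highest = laggards.
ranked = sorted(respondents, key=lambda r: r["losses"] / r["revenue"])
q = len(ranked) // 4
leaders, laggards = ranked[:q], ranked[-q:]

def erm_rate(group):
    """Share of a group that uses an ERM system."""
    return sum(r["uses_erm"] for r in group) / len(group)

print(f"Leaders using ERM:  {erm_rate(leaders):.0%}")
print(f"Laggards using ERM: {erm_rate(laggards):.0%}")
```

With a real sample you would also want each quartile to be large enough to be meaningful, which is exactly the point of tip 5 below.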
5. Make sure it’s statistically valid
Most surveys ask a sample of people (say, 100 IT managers) and draw conclusions about the whole population (tens of thousands in the US alone). The reliability of the findings depends on how big the sample is. A sample of 1,000 has a margin of error of about ±3%, but a sample of 100 has a margin of error of about ±10%, and a sample of 20 has a margin of roughly ±22% (unless there are only 20 in the population altogether, such as the CEOs of large auto companies, in which case you’ve run a census and the error is ± zero, presuming the CEOs aren’t lying). So make sure you have enough respondents to make your conclusions reliable, especially if you plan to cut the data with banner questions, which will yield smaller samples, each with a bigger margin of error.
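Those percentages come from the standard worst-case formula for a sample proportion at 95% confidence. A quick sketch, including the finite-population correction that drives a census’s error to zero:

```python
import math

def margin_of_error(sample_n, confidence_z=1.96, population=None):
    """Worst-case (p = 0.5) margin of error for a sample proportion
    at the given z-score (1.96 corresponds to 95% confidence)."""
    moe = confidence_z * math.sqrt(0.25 / sample_n)
    if population is not None and population > 1:
        # Finite-population correction: shrinks toward zero as the
        # sample approaches the whole population (a census).
        moe *= math.sqrt((population - sample_n) / (population - 1))
    return moe

print(round(margin_of_error(1000), 3))       # roughly ±3%
print(round(margin_of_error(100), 3))        # roughly ±10%
print(round(margin_of_error(20), 3))         # roughly ±22%
print(margin_of_error(20, population=20))    # a census: ± zero
```

The square root in the denominator is why cutting the data with banner questions hurts: a quartile of a 100-person sample is 25 respondents, with roughly double the margin of error of the full sample.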
And here are five things to avoid:
6. Don’t ask questions we already know the answers to
If you want to know how many people use LinkedIn for work, all you have to do is Google it or do a mental review of 10 friends. Either way, I am sure the answer will be at least 80%. For several years now we have seen a veritable torrent of surveys asking people whether they use LinkedIn, Twitter, and Facebook. We don’t need to augment the flood.
7. Don’t ask them what they think
One of the weaknesses of surveys is that they reflect what people think is happening rather than what’s actually going on. Even if you ask respondents something concrete, like how many customer complaints they had last month, you can only expect an estimate. No one is going to rummage through the files to find an exact number for your survey. That’s OK so long as you recognize that’s what you’re getting. But you will exacerbate this problem if you ask people what they think about something for which they have little or no data, such as how important social media will be next year. It’s cheaper and just as informative to ask your kids.
8. Don’t ask self-serving questions
One of my pet peeves is survey questions that ask people which way their budget is trending and whether they will outsource more next year. These questions are staples of companies that sell technology or augment clients’ staff with their consultants. Recent examples: “CIOs at midmarket and large companies in Europe and the U.S. will spend 4.5 percent more on IT products and services this year,” and “The expected future state of all business functions will show an increase in outsourcing.” Vendors ask these questions to support their own budgeting processes. In addition, they’ll flaunt the results in their reports if the numbers show an upward trend, both to please their own investors and make clients feel like laggards if they don’t spend more. I once heard a CIO say that these kinds of findings have an aroma he calls “vendor stink.”
9. Don’t rely on the data alone
A report filled with paragraph after paragraph of survey data makes for dry reading. That’s because a survey can only tell you what is happening, such as whether consulting firms are cutting back on their social media spending, but it can’t tell you why. For that answer, you need to pick a few respondents who did cut back and ask them. And if you don’t have a five-year history of running the same survey, you can support the finding with historical data gathered through secondary research. By combining all three – the survey, interviews, and some public data – you can pull together a pretty good story. Don’t cop out by substituting infographics for valuable insight. Dull data doesn’t suddenly become interesting because a graphic designer got creative. Check out this example if you don’t believe me.
10. Don’t give up ownership
If you don’t have the resources to conduct a survey, you’re probably thinking of engaging a third party to do it for you. Many of those third parties will insist on “co-branding” the report, such as CFO Magazine and the EIU. But as Chris Koch of SAP said in a recent interview, “The problem with that is it only builds credibility for CFO or EIU. It creates goodwill but not demand-generation because it’s not your survey, it’s the EIU’s; it’s not your thinking, it’s theirs.” Instead of hiring a company that’s going to plaster its own name on your survey, use a panel research company such as ResearchNow or SSI that can take your survey design, build and field the survey, and return the results for you to analyze and report on. Their name need not appear on the report. Nor ours if you work with us. Thought leadership should make you look good, not your paid help.
Building a first-rate bridge
In this age of SurveyMonkey and email, surveys are easy to construct and to field. But they are still hard to do well. People often hope or presume that they will discover insights when they trawl through their data. It doesn’t work that way. To get interesting insights you have to anticipate them or at least have an idea where they might be lurking when you design your survey.
There’s nothing in these 10 Do’s and Don’ts that has to do with how you analyze, present, or communicate the results once you have them. This is all about the prep work. As Abe Lincoln once said, “Give me six hours to chop down a tree and I will spend the first four sharpening the axe.” Sharpen well and you’ll not only fell the tree, you’ll also build a first-rate bridge to your customers.