Mel Sokotch

The problem with market research, and how to mitigate it

One side benefit of a national election (some might say, side effect) is that we get to read new polls almost every day. If you're in marketing, it's a great laboratory to see which "pitch" is working and which is not. But there's a downside: We're occasionally reminded that research is, well, research. It's not perfect, and it's sometimes confounding.

Here, for example, are results from two recent polls--each conducted by a reputable organization, each based on a large national sample, each fielded on the same exact days, each asking the same exact question, and yet each delivering significantly different results:

The President's Approval Rating

Poll            Dates       Sample          Approve   Disapprove   Spread
CBS/NY Times    3/7-3/11    1,009 adults    41%       47%          Disapprove +6
Pew Research    3/7-3/11    1,503 adults    50%       41%          Approve +9
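
To see how far apart those two results really are, here is a quick back-of-the-envelope check--a minimal Python sketch using only the figures in the table above:

    # Net approval for each poll: approve minus disapprove.
    polls = {
        "CBS/NY Times": {"approve": 41, "disapprove": 47},
        "Pew Research": {"approve": 50, "disapprove": 41},
    }

    for name, numbers in polls.items():
        net = numbers["approve"] - numbers["disapprove"]
        print(f"{name}: net approval {net:+d} points")

    # Prints:
    #   CBS/NY Times: net approval -6 points
    #   Pew Research: net approval +9 points
    # Two polls, same days, same question, and a 15-point swing between them.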

Needless to say, both surveys can't be right, and it's entirely possible both are wrong.

What, then, is a marketing executive to do?  Big marketing decisions are always better informed by reliable data. But how does one protect against the possibility the numbers are wrong?

First, and forgive the cliché, "research should never be the judge, it should be an aid to judgment."

Second, to help make better judgments, "quantitative" surveys should always include "qualitative" questions.  In other words, get the vote, but find out why target customers voted as they did.

For example, say you're quant testing three print ads for a new laundry detergent, with the key question designed to gauge "purchase interest." It will probably go something like this: "Based on what you just learned, what is the likelihood that you'll try this new detergent?" Answers might be captured on a 5-point scale: Definitely, Probably, Maybe/Maybe Not, Probably Not, Definitely Not.

This "closed-end" question should then be followed by an "open-ended" question: "In a few sentences, Can you explain why you answered as you did?"

OK, now the numbers come in, and one print ad scores highest for "purchase interest." If logic holds, this high-scoring ad will also generate the most positive "verbatims." That would corroborate that the top ad is indeed the top ad. But if the top ad doesn't get the most positive verbatims, or it's a mixed bag, or it gets chosen for the wrong reasons, then some re-thinking and probably some re-working are called for.
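
Here is one hedged way to run that cross-check: tally the closed-end scores by ad, take a rough positive/negative read on the verbatims (a crude keyword count below, standing in for a real open-end coding pass), and flag any ad whose numbers and verbatims point in different directions. The data, keyword lists, and function names are all hypothetical:

    # Illustrative cross-check: does the top-scoring ad also draw the most positive verbatims?
    # The keyword lists and sample responses are stand-ins for a real coding pass.
    from statistics import mean

    SCALE_VALUES = {"Definitely": 5, "Probably": 4, "Maybe/Maybe Not": 3,
                    "Probably Not": 2, "Definitely Not": 1}
    POSITIVE_WORDS = {"love", "great", "convincing", "believable", "fresh"}
    NEGATIVE_WORDS = {"confusing", "boring", "unbelievable", "gimmicky"}

    def verbatim_tone(text):
        # Crude tone score: +1 for each positive keyword, -1 for each negative keyword.
        words = text.lower().replace(",", " ").replace(".", " ").split()
        return sum(w in POSITIVE_WORDS for w in words) - sum(w in NEGATIVE_WORDS for w in words)

    def cross_check(responses):
        # responses: list of (ad_name, scale_answer, verbatim) tuples.
        by_ad = {}
        for ad, answer, verbatim in responses:
            stats = by_ad.setdefault(ad, {"scores": [], "tones": []})
            stats["scores"].append(SCALE_VALUES[answer])
            stats["tones"].append(verbatim_tone(verbatim))

        top_by_score = max(by_ad, key=lambda ad: mean(by_ad[ad]["scores"]))
        top_by_tone = max(by_ad, key=lambda ad: mean(by_ad[ad]["tones"]))

        if top_by_score == top_by_tone:
            print(f"{top_by_score} wins on both counts: the numbers are corroborated.")
        else:
            print(f"{top_by_score} scores highest, but {top_by_tone} draws the most positive "
                  f"verbatims: time for some re-thinking.")

    cross_check([
        ("Ad A", "Definitely", "Love the fresh scent claim"),
        ("Ad A", "Probably", "The demo was convincing"),
        ("Ad B", "Definitely", "Gimmicky, but I might try it"),
        ("Ad B", "Probably Not", "Confusing headline"),
    ])

A mismatch flagged here is a cue to go back into the verbatims by hand, not to simply re-run the numbers.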

To sum up, data should inform important marketing decisions, but quantitative survey results are not foolproof. That's why your best hedge is to make sure key closed-end questions are followed by open-ended questions. In other words, get the vote, but find out why target customers voted as they did.